  • Using language to give robots a better grasp of an open-ended world

    <p>Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed.</p>

    <p>Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (<a href="https://arxiv.org/abs/2308.07931" target="_blank">F3RM</a>), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.</p>

    <p>F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.</p>

    <p>“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”</p>

    <p><strong>Learning “what’s where by looking”</strong><br />
    <br />
    The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they’re required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.<br />
    <br />
    For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.</p>

    <p>“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”</p>

    <p>The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.</p>

    <p>“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”<br />
    <br />
    <strong>Creating a “digital twin”</strong></p>

    <p>F3RM begins to understand its surroundings by taking pictures on a selfie stick. The mounted camera snaps 50 images at different poses, enabling it to build a <a href="https://www.matthewtancik.com/nerf">neural radiance field</a> (NeRF), a deep learning method that takes 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of its surroundings in the form of a 360-degree representation of what’s nearby.</p>
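
    <p>The compositing step at the heart of that reconstruction can be shown in a short, self-contained sketch (an illustration of the general NeRF idea, not F3RM’s released code). Densities and colors predicted at sample points along a camera ray are accumulated into a single pixel; training tunes the network producing those predictions until rendered pixels match the captured photos, and the same kind of weighting can later composite other per-point quantities onto the geometry. The sample values below are made up.</p>

    <pre><code>
    # Minimal NeRF-style compositing along one camera ray (illustrative values only).
    import numpy as np

    sigma = np.array([0.0, 0.2, 1.5, 3.0, 0.1])    # predicted density at each sample point
    colors = np.random.rand(5, 3)                  # predicted RGB at each sample point
    delta = 0.05                                   # spacing between samples along the ray

    alpha = 1.0 - np.exp(-sigma * delta)           # opacity contributed by each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # light surviving to each sample
    weights = trans * alpha                        # contribution of each sample to the pixel
    pixel = (weights[:, None] * colors).sum(axis=0)  # final rendered RGB for this ray
    </code></pre>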

    <p>In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses <a href="https://openai.com/research/clip">CLIP</a>, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.<br />
    <br />
    <strong>Keeping things open-ended</strong></p>

    <p>After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, its similarity to the demonstrations the robot has been trained on, and whether it would cause any collisions. The highest-scored grasp is then chosen and executed.<br />
    <br />
    To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.<br />
    <br />
    F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.</p>
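
    <p>A rough sense of that scoring step can be conveyed with a toy sketch (not the released F3RM code): each candidate grasp carries a feature vector queried from the 3D feature field, and candidates are ranked by how well that feature matches a CLIP-style embedding of the user’s prompt and the features recorded during demonstrations, with colliding grasps discarded. The vectors and the equal weighting below are stand-in assumptions.</p>

    <pre><code>
    # Illustrative ranking of candidate grasps by language relevance, similarity to
    # demonstrations, and a collision check. All feature vectors are random stand-ins
    # for features queried from the feature field and for a text embedding.
    import numpy as np

    rng = np.random.default_rng(0)
    grasp_features = rng.normal(size=(100, 512))   # feature at each candidate grasp pose
    demo_features = rng.normal(size=(5, 512))      # features recorded from demonstrations
    text_embedding = rng.normal(size=512)          # embedding of e.g. "pick up the glass mug"
    collides = rng.integers(0, 5, size=100) == 0   # roughly 20% of grasps would hit something

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b))

    relevance = cosine(grasp_features, text_embedding)          # match to the prompt
    demo_similarity = np.max(grasp_features @ demo_features.T, axis=1)  # closest demonstration
    demo_similarity /= np.abs(demo_similarity).max()            # crude normalization

    score = relevance + demo_similarity                         # illustrative equal weighting
    score[collides] = -np.inf                                   # discard colliding grasps
    best_grasp = int(np.argmax(score))
    print("Executing grasp", best_grasp)
    </code></pre>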

    <p>“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT PhD student, CSAIL affiliate, and co-lead author William Shen. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”<br />
    <br />
    Shen and Yang wrote the paper under the supervision of Isola, with MIT professor and CSAIL principal investigator Leslie Pack Kaelbling and undergraduate students Alan Yu and Jansen Wong as co-authors. The team was supported, in part, by Amazon.com Services, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, the Air Force Office of Scientific Research, the Office of Naval Research’s Multidisciplinary University Initiative, the Army Research Office, the MIT-IBM Watson AI Lab, and the MIT Quest for Intelligence. Their work will be presented at the 2023 Conference on Robot Learning.</p>

  • 3 Questions: A roadmap toward circularity in the footwear industry

    <p><em>In March 2022, representatives of global footwear brands gathered with sustainability experts and other academic researchers on the MIT campus. The mission? To kick-start a discussion on addressing the waste produced by the footwear industry. An MIT research project arising from the Circular Shoe Systems Summit has now resulted in a white paper titled the </em><a href="https://fabric-ideas.mit.edu/connect/" target="_blank"><em>Footwear Manifesto</em></a><em>. The report is co-authored by Yuly Fuentes-Medel, manager of MIT’s <a href="https://fabric-ideas.mit.edu/" target="_blank">Fabric Innovation Hub</a> and now program director of textiles at MIT Climate Grand Challenges; Leslie Yan ’22, who majored in mechanical engineering and design; and collaborator Karo Buttler, who graduated last year from the Fashion Institute of Technology in New York.</em></p>

    <p><em>In the months following the summit, the team conducted in-depth interviews and a broad survey of the industry to bring the Footwear Manifesto to life. The report aims to provide a global overview of current barriers and challenges to achieving a more sustainable and circular footwear industry&nbsp;— that is, one that incorporates reuse, recovery, recycling, and regeneration to cut down on waste. </em><em>In a conversation prepared for </em>MIT News<em>, Fuentes-Medel, Yan, and Buttler reflect on their findings.</em> <em>Fuentes-Medel also suggests an opportunity for pre-competitive collaborative research to move the needle on climate action and circularity in the footwear industry.</em></p>

    <p><strong>Q: </strong>What are the main findings in this report?</p>

    <p><strong>Fuentes-Medel: </strong>In building this report, we spoke to more than 15 footwear companies and numerous external stakeholders about the status quo of sustainability in the sector. The document identifies opportunities to steer the industry towards environmentally sound solutions across several aspects of the business: materials management, post-consumer infrastructure, consumer behavior, and implementation of circular business models. Our survey, interviews, and analysis identified several key findings:</p>

    <ul>
    <li>Today, the complexity of shoes — the design complexity, manufacturing process, and temporal use — is the biggest challenge to circularity at scale.</li>
    <li>Companies have the will to change, but there is little strategic alignment on circularity across the footwear industry, and no common means of measuring progress.</li>
    <li>Collaboration is vital for scaling circular systems — not just within the brands, but with a multitude of stakeholders including suppliers, infrastructure investors, government, academia, and entrepreneurs.</li>
    <li>Building a circular dynamic system will require diverse solutions that open opportunities across sectors and communities.</li>
    </ul>

    <p><strong>Q: </strong>Why shoes? What are the unique challenges facing the footwear industry?</p>

    <p><strong>Fuentes-Medel: </strong>Everyone wears shoes. Whether it’s for protection, comfort and support, performance, or self-expression, footwear plays a major role in getting us where we need to go. Most people, however, have no idea what happens to their shoes when they’ve finished using them. The linear take-make-waste model of the footwear industry is deeply unsustainable from a social, economic, and environmental perspective.</p>

    <p>A fundamental barrier to achieving a circular shoe system lies with the complexity of shoes themselves, as reflected in their design, material utilization, and manufacturing processes. For example, the intricate multi-material construction of shoes, consisting of dozens of individual components, makes it nearly impossible at present to recycle and reintegrate used shoes back into the supply chain.</p>

    <p>However, the challenges faced by the industry extend beyond the shoe itself. Achieving greater sustainability in footwear necessitates a dedicated commitment to resource stewardship. A shoe designed to be recyclable or biodegradable cannot genuinely be considered “circular” unless there is a robust logistical infrastructure that oversees the collection, sorting, processing, and disassembly of the shoe at the end of its life. This infrastructure is essential for enabling the reuse, recycling, or composting of the finished product. Achieving progress in sustainable manufacturing and consumption requires extensive coordination among a wide network of suppliers, buyers, and material management entities. If such a system existed, it could create new jobs and opportunities for value creation.</p>

    <p>I have continued to collaborate with a group of participants from the summit to advance that vision by establishing — outside of MIT — a consortium of brands and industry partners that will be known as <a href="https://earthdna.org/home/the-footwear-collective/" target="_blank">The Footwear Collective</a>, which will exist to tackle some of these challenges related to circularity under the sponsorship of the nonprofit EarthDNA. Embracing the circular economy will require an inclusive shift in behavior from mere sharing to active participation. All footwear brands are invited to participate.</p>

    <p><strong>Q: </strong>Leslie, Karo, as students working on this project, what were some of your biggest takeaways?</p>

    <p><strong>Yan:</strong> Coming from a design and engineering background, I was initially drawn to investigating the sustainability challenges inherent in the way shoes are constructed and produced. However, it became evident that the environmental burden of the industry is also a product of barriers posed by the complex system in which shoes are made, sold, and consumed. Still, this complexity opens up multiple pathways towards greener practices, ranging from materials to consumer behavior&nbsp;— each of which we outline in our report. It was truly exciting to see the determination and enthusiasm of many in the industry towards taking meaningful steps towards a more sustainable future for footwear.</p>

    <p><strong>Buttler:</strong> Our goal for this report was to curate and centralize already-existing information for the footwear industry. During that process, we recognized the necessity of collaboration among brands and various industries. As a young designer, I found the industry’s excitement for more sustainable practices very inspiring, and I can’t wait to see where the footwear industry will be in 5-10 years.</p>

  • Shape-shifting fiber can produce morphing fabrics

    <p>Instead of needing a coat for each season, imagine having a jacket that would dynamically change shape so it becomes more insulating to keep you warm as the temperature drops.</p>

    <p>A programmable, actuating fiber developed by an interdisciplinary team of MIT researchers could someday make this vision a reality. Known as <a href="https://www.media.mit.edu/projects/fiberobo/overview/" target="_blank">FibeRobo</a>, the fiber contracts in response to an increase in temperature, then self-reverses when the temperature decreases, without any embedded sensors or other hard components.</p>
    <p>The low-cost fiber is fully compatible with textile manufacturing techniques, including weaving looms, embroidery, and industrial knitting machines, and can be produced continuously by the kilometer. This could enable designers to easily incorporate actuation and sensing capabilities into a wide range of fabrics for myriad applications.</p>

    <p>The fibers can also be combined with conductive thread, which acts as a heating element when electric current runs through it. In this way, the fibers actuate using electricity, which offers a user digital control over a textile’s form. For instance, a fabric could change shape based on any piece of digital information, such as readings from a heart rate sensor.</p>
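
    <p>As a purely hypothetical illustration of that digital-control idea, the snippet below maps a heart-rate reading to a duty cycle for the current sent through the conductive thread, which heats the fiber and makes the fabric contract. The thresholds and the linear mapping are invented for the example; an actual garment would need calibration to the fiber’s actuation temperature.</p>

    <pre><code>
    # Hypothetical sketch: convert a sensor reading into a heater duty cycle (0.0-1.0)
    # for the conductive thread. Values and mapping are made up for illustration.
    def heater_duty_cycle(heart_rate_bpm, rest=60.0, peak=160.0):
        """No heating at resting heart rate, full heating at peak effort."""
        fraction = (heart_rate_bpm - rest) / (peak - rest)
        return min(1.0, max(0.0, fraction))

    for bpm in (55, 90, 130, 175):
        print(bpm, "bpm ->", round(heater_duty_cycle(bpm), 2))
    </code></pre>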

    <p>“We use textiles for everything. We make planes with fiber-reinforced composites, we cover the International Space Station with a radiation-shielding fabric, we use them for personal expression and performance wear. So much of our environment is adaptive and responsive, but the one thing that needs to be the most adaptive and responsive — textiles — is completely inert,” says Jack Forman, a graduate student in the Tangible Media Group of the MIT Media Lab, with a secondary affiliation at the Center for Bits and Atoms, and lead author of a&nbsp;<a href="https://dl.acm.org/doi/10.1145/3586183.3606732" target="_blank" title="https://dl.acm.org/doi/10.1145/3586183.3606732">paper on the actuating fiber</a>.</p>

    <p>He is joined on the paper by 11 other researchers at MIT and Northeastern University, including his advisors, Professor Neil Gershenfeld, who leads the Center for Bits and Atoms, and Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences and director of the Tangible Media Group. The research will be presented at the ACM Symposium on User Interface Software and Technology.</p>

    <p><strong>Morphing materials</strong></p>

    <p>The MIT researchers wanted a fiber that could actuate silently and change its shape dramatically, while being compatible with common textile manufacturing procedures. To achieve this, they used a material known as liquid crystal elastomer (LCE).</p>

    <p>A liquid crystal is a series of molecules that can flow like liquid, but when they’re allowed to settle, they stack into a periodic crystal arrangement. The researchers incorporate these crystal structures into an elastomer network, which is stretchy like a rubber band.</p>

    <p>As the LCE material heats up, the crystal molecules fall out of alignment and pull the elastomer network together, causing the fiber to contract. When the heat is removed, the molecules return to their original alignment, and the material to its original length, Forman explains.</p>

    <p>By carefully mixing chemicals to synthesize the LCE, the researchers can control the final properties of the fiber, such as its thickness or the temperature at which it actuates.</p>

    <p>They perfected a preparation technique that creates LCE fiber which can actuate at skin-safe temperatures, making it suitable for wearable fabrics.</p>

    <p>“There are a lot of knobs we can turn. It was a lot of work to come up with this process from scratch, but ultimately it gives us a lot of freedom for the resulting fiber,” he adds.</p>

    <p>However, the researchers discovered that making fiber from LCE resin is a finicky process. Existing techniques often result in a fused mass that is impossible to unspool.</p>

    <p>Researchers are also exploring other ways to make functional fibers, such as by <a href="https://news.mit.edu/2021/programmable-fiber-0603" target="_blank">incorporating hundreds of microscale digital chips</a> into a polymer, <a href="https://news.mit.edu/2021/fibers-breath-regulating-1015" target="_blank">utilizing an activated fluidic system</a>, or including piezoelectric material that can <a href="https://news.mit.edu/2022/fabric-acoustic-microphone-0316" target="_blank">convert sound vibrations into electrical signals</a>.</p>

    <p><strong>Fiber fabrication</strong></p>

    <p>Forman built a machine using 3D-printed and laser-cut parts and basic electronics to overcome the fabrication challenges. He initially built the machine as part of the graduate-level course MAS.865 (Rapid-Prototyping of Rapid-Prototyping Machines:&nbsp;How to Make Something that Makes [almost] Anything).</p>

    <p>To begin, the thick and viscous LCE resin is heated, and then slowly squeezed through a nozzle like that of a glue gun. As the resin comes out, it is cured carefully using UV lights that shine on both sides of the slowly extruding fiber.</p>

    <p>If the light is too dim, the material will separate and drip out of the machine, but if it is too bright, clumps can form, which yields bumpy fibers.</p>

    <p>Then the fiber is dipped in oil to give it a slippery coating and cured again, this time with UV lights turned up to full blast, creating a strong and smooth fiber. Finally, it is collected into a top spool and dipped in powder so it will slide easily into machines for textile manufacturing.</p>

    <p>From chemical synthesis to finished spool, the process takes about a day and produces approximately a kilometer of ready-to-use fiber.</p>

    <p>“At the end of the day, you don’t want a diva fiber. You want a fiber that, when you are working with it, falls into the ensemble of materials — one that you can work with just like any other fiber material, but then it has a lot of exciting new capabilities,” Forman says.</p>

    <p>Creating such a fiber took a great deal of trial and error, as well as the collaboration of researchers with expertise in many disciplines, from chemistry to mechanical engineering to electronics to design.</p>

    <p>The resulting fiber, called FibeRobo, can contract up to 40 percent without bending, actuate at skin-safe temperatures (the skin-safe version of the fiber contracts up to about 25 percent), and be produced with a low-cost setup for 20 cents per meter, which is about 60 times cheaper than commercially available shape-changing fibers.</p>

    <p>The fiber can be incorporated into industrial sewing and knitting machines, as well as nonindustrial processes like hand looms or manual crocheting, without the need for any process modifications.</p>

    <p>The MIT researchers used FibeRobo to demonstrate several applications, including an adaptive sports bra made by embroidery that tightens when the user begins exercising.</p>

    <p>They also used an industrial knitting machine to create a compression jacket for Forman’s dog, whose name is Professor. The jacket would actuate and “hug” the dog based on a Bluetooth signal from Forman’s smartphone. Compression jackets are commonly used to alleviate the separation anxiety a dog can feel while its owner is away.</p>

    <p>In the future, the researchers want to adjust the fiber’s chemical components so it can be recyclable or biodegradable. They also want to streamline the polymer synthesis process so users without wet lab expertise could make it on their own.</p>

    <p>Forman is excited to see the FibeRobo applications other research groups identify as they build on these early results. In the long run, he hopes FibeRobo can become something a maker could buy in a craft store, just like a ball of yarn, and use to easily produce morphing fabrics.</p>

    <p>“LCE fibers come to life when integrated into functional textiles. It is particularly fascinating to observe how the authors have explored creative textile designs using a variety of weaving and knitting patterns,” says Lining Yao, the Cooper-Siegel Associate Professor of Human Computer Interaction at Carnegie Mellon University, who was not involved with this work.</p>

    <p>This research was supported, in part, by the William Asbjornsen Albert Memorial Fellowship, the Dr. Martin Luther King Jr. Visiting Professor Program, Toppan Printing Co., Honda Research, Chinese Scholarship Council, and Shima Seiki. The team included Ozgun Kilic Afsar, Sarah Nicita, Rosalie (Hsin-Ju) Lin, Liu Yang, Akshay Kothakonda, Zachary Gordon, and Cedric Honnet at MIT; and Megan Hofmann and Kristen Dorsey at Northeastern University.</p>

  • Morris Chang ’52, SM ’53 describes the secrets of semiconductor success

    <p>Groundbreaking technologist Morris Chang ’52, SM ’53 discussed the key elements behind Taiwan’s long-term ascendancy in semiconductor manufacturing, while speaking to a large campus audience in an MIT talk on Tuesday.</p>

    <p>Chang is the influential founder and former longtime head of TSMC, the Taiwan Semiconductor Manufacturing Company, which has become the world’s leading microchip maker. Chang started the firm in 1987, and since then it has helped reshape the industry by making Taiwan a crucial center of production and by focusing on manufacturing while chip design occurs elsewhere.</p>

    <p>In his remarks, Chang, whose career spans the history of the semiconductor industry, gave a broad overview of the development of chip manufacturing, then focused on some of the factors that have helped TSMC and Taiwan thrive. In his view, this includes a healthy supply of talent, in the form of engineers and other technical employees willing to work in manufacturing; low turnover of employees; a geographical concentration of industry manufacturing in Taiwan; and the “experience curve theory,” in which accumulated manufacturing experience leads to lower production costs.</p>

    <p>“Why is TSMC successful in Taiwan?” asked Chang. “Because TSMC also gets good, well-trained technicians, and even well-trained operators from a lot of trade schools in Taiwan. … Their students aspire to make a good living as technicians.”</p>

    <p>Chang also argued that the process of learning by doing, in which manufacturers can reduce costs by improving their processes, is predicated on having production centered in one place, with connectivity among workers in common conditions.</p>

    <p>“It works, the learning curve, the experience curve, it works only when you have a common location,” Chang said. “Learning is local.”</p>

    <p>More broadly, Chang noted, prominence in semiconductor manufacturing “seems to be related to the status of economic development of that country. Frankly, the advantages that Taiwan enjoys today … were enjoyed by the U.S. in the ’50s and ’60s.” Decades in the future, he suggested, there could be a rise in semiconductor activity in India, Vietnam, or Indonesia, depending on the way circumstances evolve.</p>

    <p>Chang’s talk, “Lessons of a Life in Semiconductor Manufacturing, from Texas to Taiwan,” was delivered to a capacity audience of more than 425 people in MIT’s Room 10-250. The event was part of the Manufacturing@MIT Distinguished Speaker Series.</p>

    <p>Chang was born in China in 1931, left the country in the late 1940s, and earned his BS and MS in mechanical engineering from MIT. He later received a PhD in electrical engineering from Stanford University. Chang entered the semiconductor business by taking a job at Sylvania in the mid-1950s. He moved to Texas Instruments — then a chip-making power — in 1958, rising to industry prominence during a quarter-century tenure there. Chang also became a U.S. citizen in 1962. In the 1980s, he was invited to work on the development of industry and technology in Taiwan, and soon thereafter launched TSMC. Over time, TSMC has become a juggernaut, sustaining its success and growing to become the world leader in the field.</p>

    <p>At Tuesday’s event, Chang was introduced by MIT Provost Cynthia Barnhart, who emphasized the enduring connections he has built at the Institute. Chang is a life member emeritus of the MIT Corporation and has been an important supporter of the Institute. The renovated Building E52, which houses the Department of Economics, the Samberg Conference Center, and offices of the MIT Sloan School of Management, is now the Morris and Sophie Chang Building.</p>

    <p>“For MIT, Morris is an extraordinary example of the lasting impact our alumni have on the legacy of innovation at the Institute,” Barnhart said.</p>

    <p>At the end of his talk, Chang also briefly discussed geopolitics and chip production. With U.S.-China relations relatively strained, chip manufacturing is one of the areas where they are competing in industrial terms. Meanwhile, the precise nature of China’s policy intentions vis-à-vis Taiwan remains uncertain.</p>

    <p>“Without national security, we will lose everything, everything that we value,” Chang said. At the same time, he noted, “By all means, let’s avoid even a Cold War, if we can.”</p>

    <p>The event was hosted by the Manufacturing@MIT Working Group, in collaboration with the School of Engineering, the Department of Mechanical Engineering, the Department of Political Science, Leaders for Global Operations, Microsystems Technology Laboratories, the Industrial Performance Center, MIT.nano, Machine Intelligence for Manufacturing and Operations, the Laboratory for Manufacturing and Productivity, and Mission Innovation X.</p>

  • Teaching students about photonics to build up the US workforce

    <p>In 2019, Kevin McComber ’05, PhD ’11 was at MIT working on integrated photonics — chip-based devices that send and receive signals using light — and set out to hire someone to design photonic chips. The experience made him realize just how little expertise the U.S. workforce has in integrated photonics design, a problem that’s part of a broader shortage of workers in semiconductor manufacturing.</p>

    <p>The insight would change the trajectory of McComber’s career.</p>

    <p>Despite having no background in photonics design at the time, McComber decided to leave MIT and start a photonics design services company called Spark Photonics. To directly address the talent shortage he witnessed, McComber and his co-founder, Al Kapoor, launched the Spark Photonics Foundation, a nonprofit that teaches K-12 and college students about concepts in STEM and advanced manufacturing, using semiconductors and photonics technologies.</p>

    <p>Over the last two years, the Spark Photonics Foundation has facilitated its project-based learning program, called SparkAlpha, with more than 400 students across Massachusetts.</p>

    <p>“The Foundation came from recognizing the need for workers and thinking long-term about what makes the most sense, not just for us but for the country, to address the gap in photonics and in semiconductor manufacturing more broadly,” McComber says.</p>

    <p>The SparkAlpha program exposes students to the applications of integrated photonics — which includes areas such as chemical sensing and machine vision — and challenges them to conceptualize creative photonics-based products to solve problems. It also involves trips to local colleges and companies working in the semiconductor supply chain. At the end of the program, students pitch their ideas in an event McComber compares to the television show “Shark Tank.”</p>

    <p>“It gets students outside of the classroom, and that’s what we’re finding is a big value, especially to schools that are struggling to come up with relevant 21st-century curricula,” McComber says. “We can come in and say we’re a company that needs people to do this, and we’re going to get you in touch with other companies that have that need as well. Most K-12 schools and even many colleges struggle to make those connections.”</p>

    <p>In addition to teaching students, the foundation is trying to educate communities and build connections between K-12 schools, industry, and local colleges.</p>

    <p>“People tend to think of manufacturing as a career for students who can’t think, or who can only do repetitive stuff with their hands,” McComber says. “That’s definitely not the case. I know this because I worked in the manufacturing industry before starting Spark. We’re changing the conversations around manufacturing with teachers, guidance counselors, principals, parents — the students’ entire sphere of influencers.”</p>

    <p><strong>Taking the leap</strong></p>

    <p>McComber’s journey to MIT started long before he submitted an application to the school.</p>

    <p>“In third grade I realized I love math,” McComber explains. “A teacher told me, ‘If you love math, you should be an engineer, and if you want to be an engineer you should go to MIT.’ By 7th grade, I decided I wanted to go to MIT for sure.”</p>

    <p>McComber majored in materials science and engineering at MIT. During that time, he gravitated toward working with his hands. When he decided to stay at MIT for his PhD under Professor Lionel Kimerling, he started tinkering with Kimerling’s photonics machines. The tinkering never really stopped, and over the next six years McComber went on to earn his doctorate in integrated photonics fabrication, studying the production of chip-scale devices for light manipulation and detection.</p>

    <p>By the time McComber graduated in 2011, he had spent 10 years at MIT and thought he was leaving for good. He spent the next few years working on chip manufacturing at Intel before moving to business consulting roles.</p>

    <p>Then in 2018, he got a call from Kimerling. MIT had begun an integrated photonics workforce program and Kimerling wanted McComber to join the team. He couldn’t resist.</p>

    <p>The work was part of a federally funded program called AIM Photonics, whose mission is to advance the integrated photonics industry in the U.S.</p>

    <p>“Part of the position was starting design services in integrated photonics because no such services existed in the U.S.,” McComber recalls. “I wasn’t skilled in design, so I went around asking other people to start a design firm. Nobody wanted to.”</p>

    <p>With Kimerling’s blessing, McComber decided to leave MIT and start a photonics design company with Kapoor.</p>

    <p>Since then, Spark Photonics has helped companies of all sizes with the design of integrated photonics. The company has also expanded its services by partnering with organizations offering design software and fabrication facilities, providing end-to-end services for scalable integrated photonics design.</p>

    <p>The work made McComber acutely aware of the shortages in integrated photonics talent, and he thought of a way to complement the workforce training being done at MIT.</p>

    <p>“There’s this huge funnel of prospective manufacturing talent that gets whittled down in elementary, middle, and high school, and then it gets super small in college and grad school,” McComber explains. “Instead of fighting over a tiny piece at the end of that funnel, we wanted to go upstream and address the big part. There are large sections of that, like underrepresented minorities and girls, who don’t traditionally get addressed by the STEM funnel, so we said if we can move the needle at that higher point, we could really make a big difference.”</p>

    <p>The Spark Photonics Foundation’s work started in Massachusetts with a grant from the federal government in 2021. That fall, McComber added Spark’s programming to an entrepreneurship class in Chelmsford High School. Spark’s first employee ran the next program in an introduction to engineering class at Lowell High School. As part of the programs, students visited Middlesex Community College and the precision electronics company Mycronic in Tewksbury, Massachusetts, as they crafted their solutions.</p>

    <p>At the end, students said the curriculum increased their knowledge of advanced manufacturing careers and made them more interested in STEM fields. The teachers also found Spark’s program to be a practical scaffolding to teach more abstract concepts.</p>

    <p>“In Lowell, the teacher at first was very resistive because she has to teach a bunch of things to get through the curriculum,” McComber says. “But once she got into it, she found she could use this tangible program to layer in all the other concepts she was teaching. By week three, she had pretty much thrown out her planned curriculum and used ours as the platform to make things tangible for students. That’s not an uncommon occurrence.”</p>

    <p>In total, the program has run in 22 classrooms so far, with the main limiting factor being that Spark’s team still has to travel to schools to put it on. The company recently received another federal grant that should accelerate its ability to scale.</p>

    <p><strong>Lessons go virtual</strong></p>

    <p>On Sept. 27, the Department of Defense announced it had awarded the Spark Photonics Foundation funding to create a virtual teacher training program to scale its curriculum across the country. The foundation has already begun hosting professional development workshops with K-12 teachers, held at Western New England University in Springfield, Massachusetts, where they learn about the photonics industry and possible career paths for students.</p>

    <p>“When we step into a classroom, we don’t know the class, but the teacher does,” McComber says. “They know which students have a hard time understanding materials presented visually, or which ones have English language learning needs, or which ones are on individual education plans. We think that through this new model not only can we distribute it to more teachers, because we’re training them through Zoom, but they can then tailor it to their classrooms.”</p>

    <p>Spark has also launched a second program, called SparkBeta, in which students learn the Python coding language and use it to design photonic chips.</p>

    <p>“In SparkBeta, the students actually do the design after about six hours of instruction,” McComber says. “It’s approachable for everybody, and we think the program has a great future because it melds the design and education in our business, and it breaks down a lot of barriers to get students thinking about education and career paths they’ve never thought about before.”</p>
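
    <p>As a flavor of the kind of Python-based photonics exercise such a program might build toward (this example is illustrative and is not taken from the SparkBeta curriculum), the snippet below computes the transmission of an unbalanced Mach-Zehnder interferometer, a basic building block of photonic chips, as a function of wavelength. The effective index and arm-length imbalance are assumed values.</p>

    <pre><code>
    # Illustrative photonics calculation: transmission of an unbalanced Mach-Zehnder
    # interferometer versus wavelength. A phase difference of 2*pi*n_eff*dL/lambda
    # between the two arms gives a cos^2 transmission for ideal 50/50 splitters.
    import numpy as np

    n_eff = 2.4                                       # assumed waveguide effective index
    dL = 40e-6                                        # assumed 40-micron path imbalance
    wavelengths = np.linspace(1.50e-6, 1.60e-6, 500)  # wavelength sweep in meters

    phase = 2 * np.pi * n_eff * dL / wavelengths
    transmission = np.cos(phase / 2) ** 2             # lossless arms, ideal splitters

    # Approximate free spectral range near band center (using n_eff in place of the group index).
    fsr = wavelengths.mean() ** 2 / (n_eff * dL)
    print(f"Free spectral range: {fsr * 1e9:.1f} nm")
    print(f"Transmission range: {transmission.min():.2f} to {transmission.max():.2f}")
    </code></pre>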

    <p>That gets at a key goal for the foundation: to not only equip students with skills but also put advanced manufacturing on the map for schools around the country.</p>

    <p>“We’re changing the conversation because teachers might never have said the word photonics or even semiconductors had we not been there,” McComber says. “Now they’re seeing this is supported by industry and supported by colleges. They’re seeing this is a national priority to build up a workforce in semiconductors.”</p>

  • A new way to integrate data with physical objects

    <p>To get a sense of what <a href="http://groups.csail.mit.edu/hcie/files/research-projects/structcode/2023-SCF-StructCode-paper.pdf" target="_blank">StructCode</a> is all about, says Mustafa Doğa Doğan, think of Superman. Not the “faster than a speeding bullet” and “more powerful than a locomotive” version, but a Superman, or Superwoman, who sees the world differently from ordinary mortals — someone who can look around a room and glean all kinds of information about ordinary objects that is not apparent to people with less penetrating faculties.</p>

    <p>That, in a nutshell, is “the high-level idea behind StructCode,” explains Doğan, a PhD student in electrical engineering and computer science at MIT and an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “The goal is to change the way we interact with objects” — to make those interactions more meaningful and more meaning-laden — “by embedding information into objects in ways that can be readily accessed.”</p>

    <p>StructCode grew out of an effort called InfraredTags, which Doğan and other colleagues introduced in 2022. That work, as well as the current project, was carried out in the laboratory of MIT Associate Professor Stefanie&nbsp;Mueller — Doğan’s advisor, who has taken part in both projects. In last year’s approach, “invisible” tags — that can only be seen with cameras capable of detecting infrared light — were used to reveal information about physical objects. The drawback there was that many cameras cannot perceive infrared light. Moreover, the method for fabricating these objects and affixing the tags to their surfaces relied on 3D printers, which tend to be very slow and often can only make objects that are small.</p>
    <p>StructCode, at least in its original version, relies on objects produced with laser-cutting techniques that can be manufactured within minutes, rather than the hours it might take on a 3D printer. Information can be extracted from these objects, moreover, with the RGB cameras that are commonly found in smartphones; the ability to operate in the infrared range of the spectrum is not required.</p>

    <p>In their initial demonstrations of the idea, the MIT-led team decided to construct their objects out of wood, making pieces such as furniture, picture frames, flowerpots, or toys that are well suited to laser-cut fabrication. A key question that had to be resolved was this: How can information be stored in a way that is unobtrusive and durable, as compared to externally-attached bar codes and QR codes, and also will not undermine an object’s structural integrity?</p>

    <p>The solution that the team has come up with, for now, is to rely on joints, which are ubiquitous in wooden objects made out of more than one component. Perhaps the most familiar is the finger joint, which has a kind of zigzag pattern whereby two wooden pieces adjoin at right angles such that every protruding “finger” along the joint of the first piece fits into a corresponding “gap” in the joint of the second piece and, similarly, every gap in the joint of the first piece is filled with a finger from the second.</p>

    <p>“Joints have these repeating features, which are like repeating bits,” Doğan says. To create a code, the researchers slightly vary the length of the gaps or fingers. A standard length is assigned a 1, a slightly shorter length is assigned a 0, and a slightly longer length is assigned a 2. The encoding scheme is based on the sequence of these numbers that can be observed along a joint. Because each position can take three values, every string of four of them allows 81 (3<sup>4</sup>) possible variations.</p>
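
    <p>A toy sketch makes the scheme concrete (the actual StructCode lengths and tolerances may differ): a number is written in base 3, and each digit selects a finger length that is slightly shorter than, equal to, or slightly longer than the nominal length.</p>

    <pre><code>
    # Toy sketch of the encoding idea; the nominal length and variation are assumed values.
    NOMINAL_MM = 10.0     # assumed nominal finger length
    DELTA_MM = 0.8        # assumed length variation carrying one ternary digit

    def to_base3(value, n_digits):
        digits = []
        for _ in range(n_digits):
            digits.append(value % 3)
            value //= 3
        return digits[::-1]

    def digits_to_lengths(digits):
        # 0, 1, 2 map to slightly short, nominal, slightly long
        return [NOMINAL_MM + (d - 1) * DELTA_MM for d in digits]

    message = 57                          # any value from 0 to 80 fits in four digits (3**4 = 81)
    lengths = digits_to_lengths(to_base3(message, 4))
    print(lengths)                        # [10.8, 9.2, 10.0, 9.2], since 57 is 2010 in base 3
    </code></pre>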

    <p>The team also demonstrated ways of encoding messages in “living hinges” — a kind of joint that is made by taking a flat, rigid piece of material and making it bendable by cutting a series of parallel, vertical lines. As with the finger joints, the distance between these lines can be varied: 1 being the standard length, 0 being a slightly shorter length, and 2 being slightly longer. And in this way, a code can be assembled from an object that contains a living hinge.</p>

    <p>The idea is described in a paper, “<a href="http://groups.csail.mit.edu/hcie/files/research-projects/structcode/2023-SCF-StructCode-paper.pdf" target="_blank">StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects</a>,” that was presented this month at the 2023 ACM Symposium on Computational Fabrication in New York City. Doğan, the paper’s first author, is joined by Mueller and four coauthors — recent MIT alumna Grace Tang ’23, MNG ’23; MIT undergraduate Richard Qi; University of California at Berkeley graduate student Vivian Hsinyueh Chan; and Cornell University Assistant Professor Thijs Roumen.</p>

    <p>“In the realm of materials and design, there is often an inclination to associate novelty and innovation with entirely new materials or manufacturing techniques,” notes Elvin Karana, a professor of materials innovation and design at the Delft University of Technology. One of the things that impresses Karana most about StructCode is that it provides a novel means of storing data by “applying a commonly used technique like laser cutting and a material as ubiquitous as wood.”</p>

    <p>The idea for StructCode, adds University of Colorado computer scientist Ellen Yi-Luen Do, is “simple, elegant, and totally makes sense. It’s like having the Rosetta Stone to help decipher Egyptian hieroglyphs.”</p>

    <p>Patrick Baudisch, a computer scientist at the Hasso Plattner Institute in Germany, views StructCode as “a great step forward for personal fabrication.&nbsp;It takes a key piece of functionality&nbsp;that’s only offered today for mass-produced goods and brings it to custom objects.”</p>

    <p>Here, in brief, is how it works: First, a laser cutter — guided by a model created via StructCode — fabricates an object into which encoded information has been embedded. After downloading a StructCode app, a user can decode the hidden message by pointing a cellphone camera at the object, which can (aided by StructCode software) detect subtle variations in length found in an object’s outward-facing joints or living hinges.</p>
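
    <p>Continuing the toy example from above (standing in for, not reproducing, StructCode’s actual image-analysis pipeline), decoding amounts to snapping each measured length to the nearest of the three expected lengths and reassembling the digits:</p>

    <pre><code>
    # Toy decoding sketch: classify noisy length measurements into ternary digits and
    # rebuild the encoded number. Lengths and tolerances are the same assumed values
    # as in the encoding sketch above.
    NOMINAL_MM = 10.0
    DELTA_MM = 0.8
    EXPECTED = [NOMINAL_MM - DELTA_MM, NOMINAL_MM, NOMINAL_MM + DELTA_MM]   # digits 0, 1, 2

    def decode(measured_lengths_mm):
        digits = [min(range(3), key=lambda d: abs(length - EXPECTED[d]))
                  for length in measured_lengths_mm]
        value = 0
        for d in digits:
            value = value * 3 + d
        return digits, value

    # Lengths as a camera-based measurement might report them, with a little noise.
    print(decode([10.74, 9.28, 10.05, 9.17]))   # ([2, 0, 1, 0], 57)
    </code></pre>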

    <p>The process is even easier if the user is equipped with augmented reality glasses, Doğan says. “In that case, you don’t need to point a camera. The information comes up automatically.” And that can give people more of the “superpowers” that the designers of StructCode hope to confer.</p>

    <p>“The object doesn’t need to contain a lot of information,” Doğan adds. “Just enough — in the form of, say, URLs — to direct people to places they can find out what they need to know.”</p>

    <p>Users might be sent to a website where they can obtain information about the object — how to care for it, and perhaps eventually how to disassemble it and recycle (or safely dispose of) its contents. A flowerpot that was made with living hinges might inform a user, based on records that are maintained online, as to when the plant inside the pot was last watered and when it needs to be watered again. Children examining a toy crocodile could, through StructCode, learn scientific details about various parts of the animal’s anatomy. A picture frame made with finger joints modified by StructCode could help people find out about the painting inside the frame and about the person (or persons) who created the artwork — perhaps linking to a video of an artist talking about this work directly.</p>

    <p>“This technique could pave the way for new applications, such as interactive museum exhibits,” says Raf Ramakers, a computer scientist at Hasselt University in Belgium. “It holds the potential for broadening the scope of how we perceive and interact with everyday objects” — which is precisely the goal that motivates the work of Doğan and his colleagues.</p>

    <p>But StructCode is not the end of the line, as far as Doğan and his collaborators are concerned. The same general approach could be adapted to other manufacturing techniques besides laser cutting, and information storage doesn’t have to be confined to the joints of wooden objects. Data could be represented, for instance, in the texture of leather, within the pattern of woven or knitted pieces, or concealed by other means within an image. Doğan is excited by the breadth of available options and by the fact that their “explorations into this new realm of possibilities, designed to make objects and our world more interactive, are just beginning.”</p>

  • Internships fabricate a microelectronics future

    <p>Nestled among the diverse labs and prototyping facilities at MIT Lincoln Laboratory, the <a href="https://www.ll.mit.edu/about/facilities/microelectronics-laboratory">Microelectronics Laboratory</a> (ML) whirs away. Technicians in the ML fabricate advanced integrated circuits, which end up in systems that peer into the cosmos, observe weather from space, and power quantum computers — to name just a few uses. The ML is one of several facilities within Lincoln Laboratory’s <a href="https://www.ll.mit.edu/r-d/advanced-technology/mpf">Microsystems Prototyping Foundry</a> (MPF) operations.</p>

    <p>This summer, 12 students had a hand in MPF operations as part of the <a href="https://www.ma-microelectronics.org/">Massachusetts Microelectronics Internship Program</a>. The <a href="https://cam.masstech.org/cam-programs/microelectronics">Northeast Microelectronics Coalition</a> started this program last year to help train the next generation of microelectronics professionals. Accepted undergraduates are placed in the region’s leading microelectronics companies for 10 weeks. No prior experience is needed, only a willingness to learn.</p>

    <p>Donning “bunny suits,” the head-to-toe coverings staff wear to help keep the lab free from contaminants, the interns each conducted experiments to improve fabrication processes. The ML maintains an ultraclean certification, Class 10/ISO 4, meaning that the air contains a maximum of 10 particles (over 0.5 micron in size) per cubic foot. Even so, particles can still end up on the wafers — slices of semiconductor materials, like silicon, that form the base of circuits.</p>

    <p>“Particles can be device killers. Cleanrooms control and reduce the amount and size of particles,” says Peter Preston, an intern from Springfield Technical Community College (STCC). Over the summer, Preston assessed the number of particles present on wafers after they underwent specific processing steps and troubleshot ways to lower that count. Fellow STCC student Travis Donelon sampled the ML’s water supply to test for bacteria, which can introduce particles to the water during rinsing. He also analyzed the flow of nitrogen gas, an essential material for keeping surfaces free of moisture and particles throughout every step of the fabrication process.</p>

    <p>In running these experiments, Preston and Donelon learned how to run each machine in the Compound Semiconductor Laboratory (CSL) within the MPF, training that could enable them to be “hired tomorrow as technicians,” says Scott Eastwood, the CSL operations manager.</p>

    <p>“Getting hands-on work with almost every tool used in the fabrication process was a great opportunity to see the big picture of what microelectronics is all about,” Donelon says.</p>

    <p>Dan Pulver, the ML group leader, says that the internships are helping the ML plot a route of growth. “We’re building skills and opportunity awareness, along with relationships, with more students who may go on to work in our group.” A recent self-assessment showed that the ML has the most retirement risk of all groups at Lincoln Laboratory — a finding that mirrors the microelectronics workforce, both regionally and nationally.&nbsp;</p>

    <p>Experts <a href="https://www.semiconductors.org/turning-the-tide-for-semiconductor-manufacturing-in-the-u-s/#:~:text=U.%20S.%20Decline%3A%20U.S.%20companies%20account,is%20concentrated%20in%20East%20Asia.">find</a> that the industry’s knowledge base is shrinking fast, as U.S. dominance in semiconductors has receded in recent decades. A chip shortage during the Covid-19 pandemic shone a light on the risks of relying on overseas microelectronics fabrication. The 2022 CHIPS (for Creating Helpful Incentives to Produce Semiconductors) and Science Act aims to ramp up chip fabrication and innovation in the United States, but a new generation of workers will need the relevant knowledge to do so. This internship program was made possible, in part, by funding from CHIPS, which allocated $13.2 billion for R&amp;D and workforce development.</p>

    <p>Kara Stratton, a rising junior at Boston University, credits this internship with her decision to now pursue a nanotechnology concentration. Her summer project involved teasing out issues in the process of depositing platinum. In creating circuits, platinum (among other metals) is heated in a low-pressure chamber, vaporized, and deposited onto a wafer. She made changes to the platinum recipe and heat-up processes to reduce spitting, or uneven deposits.</p>

    <p>“Having a hands-on job that taught me so much and gave me priceless experiences made me feel more confident in making the decision to pursue a career in microelectronics,” Stratton says. “Lincoln Laboratory, and more specifically the ML, fosters an inclusive and innovative community of employees who were always willing to help me learn new things.”</p>

    <p>Some students will continue their work in the fall as student technical assistants. “We often see the eyes of students light up as they discuss new experiences and accomplishments — a great short-term reward.&nbsp;I think long term will work out, too,” Pulver says. One student, Ian Pahl from Western New England University, hopes to continue his research on reducing ML energy use. One of his projects studied the impact of reducing airflow fan speeds, a change that could save the ML up to $100,000 a year while also shrinking its carbon footprint, according to Pulver.</p>

    <p>Besides the time spent in the ML, each intern also gained mentorship, participated in training events, and learned more about the diverse projects undertaken at a federally funded R&amp;D center. They received subsidized housing and transportation, as part of the many benefits Lincoln Laboratory offers through its wider <a href=”https://www.ll.mit.edu/careers/student-opportunities/summer-research-program”>Summer Research Program</a>.</p>

    <p>As Donelon heads back to school to study optics and photonics, he looks forward to expanding on his newfound knowledge. “My internship experience was nothing short of incredible. Coming from a switched major and community college, I was not expecting but very humbled to be working at such a prestigious laboratory. The work I have done over the summer has been both challenging and gratifying.”</p>

    <p>Students interested in applying for the Massachusetts Microelectronics Internship Program can learn more on the program’s <a href="https://www.ma-microelectronics.org/">website</a>.</p>

  • Technologies for water conservation and treatment move closer to commercialization

    <p>The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) provides Solutions Grants to help MIT researchers launch startup companies or products to commercialize breakthrough technologies in water and food systems. The Solutions Grant Program began in 2015 and is supported by <a href="https://www.communityjameel.org/" target="_blank">Community Jameel</a>. In addition to one-year, renewable grants of up to $150,000, the program also matches grantees with industry mentors and facilitates introductions to potential investors. Since its inception, the J-WAFS Solutions Program has awarded over $3 million in funding to the MIT community. Numerous startups and products, including a portable desalination device and a company commercializing a novel food safety sensor, have spun out of this support.</p>

    <p>The 2023 J-WAFS Solutions Grantees are Professor C. Cem Tasan of the Department of Materials Science and Engineering and Professor Andrew Whittle of the Department of Civil and Environmental Engineering. Tasan’s project involves reducing water use in steel manufacturing and Whittle’s project tackles harmful algal blooms in water. Project work commences this September.</p>

    <p>“This year’s Solutions Grants are being awarded to professors Tasan and Whittle to help commercialize technologies they have been developing at MIT,” says J-WAFS executive director Renee J. Robins. “With J-WAFS’ support, we hope to see the teams move their technologies from the lab to the market, so they can have a beneficial impact on water use and water quality challenges,” Robins adds.</p>

    <p><strong>Reducing water consumption by solid-state steelmaking</strong></p>

    <p>Water is a major requirement for steel production. The steel industry ranks fourth in industrial freshwater consumption worldwide, since large amounts of water are needed mainly for cooling purposes in the process. Unfortunately, a strong correlation has also been shown to exist between freshwater use in steelmaking and water contamination. As the global demand for steel increases and freshwater availability decreases due to climate change, improved methods for more sustainable steel production are needed.</p>

    <p>A strategy to reduce the water footprint of steelmaking is to explore steel recycling processes that avoid liquid metal processing. With this motivation, Cem Tasan, the Thomas B. King Associate Professor of Metallurgy in the Department of Materials Science and Engineering, and postdoc Onur Guvenc PhD created a new process called <a href="https://jwafs.mit.edu/projects/2023/solid-state-scrap-processing-pathway-drastically-reduce-water-consumption-steelmaking" target="_blank">scrap metal consolidation</a> (SMC). SMC is based on a well-established metal forming process known as roll bonding. Conventionally, roll bonding requires intensive prior surface treatment of the raw material, specific atmospheric conditions, and high deformation levels. Tasan and Guvenc’s research revealed that SMC can overcome these restrictions by enabling the solid-state bonding of scrap into a sheet metal form, even when the surface quality, atmospheric conditions, and deformation levels are suboptimal. Through lab-scale proof-of-principle investigations, they have already identified SMC process conditions and validated the mechanical formability of resulting steel sheets, focusing on mild steel, the most common sheet metal scrap.</p>

    <p>The J-WAFS Solutions Grant will help the team to build customer product prototypes, design the processing unit, and develop a scale-up strategy and business model. By avoiding liquid metal processing, SMC has the potential to cut the energy needed for steel recycling by up to 86 percent, reduce the associated carbon dioxide emissions, lower contamination risk, and safeguard the freshwater resources that would otherwise be directed to industrial consumption.&nbsp;</p>

    <p><strong>Detecting harmful algal blooms in water before it’s too late</strong></p>

    <p>Harmful algal blooms (HABs) are a growing problem in both freshwater and saltwater environments worldwide, causing an estimated $13 billion in annual damage to drinking water, water for recreational use, commercial fishing areas, and desalination activities. HABs pose a threat to both human health and aquaculture, thereby threatening the food supply. Toxins in HABs are produced by some cyanobacteria, or blue-green algae, whose communities change in composition in response to eutrophication from agricultural runoff, sewer overflows, or other events. Mitigation of risks from HABs is most effective when there is advance warning of these changes in algal communities.&nbsp;</p>

    <p>Most in situ measurements of algae are based on fluorescence spectroscopy that is conducted with LED-induced fluorescence (LEDIF) devices, or probes that induce fluorescence of specific algal pigments using LED light sources. While LEDIFs provide reasonable estimates of concentrations of individual pigments, they lack the resolution to discriminate algal classes within the complex mixtures found in natural water bodies. In prior research, Andrew Whittle, the Edmund K. Turner Professor of Civil and Environmental Engineering, worked with colleagues to design <a href=”https://jwafs.mit.edu/projects/2023/novel-spectrofluorometer-online-early-detection-harmful-algal-blooms” target=”_blank”>REMORA</a>, a low-cost, field-deployable prototype spectrofluorometer for measuring induced fluorescence. This research was part of a collaboration between MIT and the AMS Institute. Through an extensive laboratory calibration program using various algae cultures, Whittle and the team trained a machine learning model to discriminate and quantify cell concentrations for mixtures of different algal groups in water samples. The group demonstrated these capabilities in a series of field measurements at locations in Boston and Amsterdam.&nbsp;</p>
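
    <p>As an illustrative aside, the snippet below is a minimal sketch, not the REMORA codebase, of the kind of calibration-driven model described above: a multi-output regressor that maps induced-fluorescence spectra to per-group cell concentrations. The algal group names, spectra, and calibration data are all hypothetical stand-ins generated inside the script.</p>

    ```python
    # Minimal sketch (not the REMORA codebase): mapping induced-fluorescence
    # spectra to per-algal-group cell concentrations with a multi-output
    # regressor. All spectra, group names, and calibration data below are
    # hypothetical stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 500, 64            # hypothetical calibration set size
    groups = ["cyanobacteria", "green_algae", "diatoms"]

    # Hypothetical calibration data: each group gets a characteristic emission
    # profile, and a mixture's spectrum is roughly the concentration-weighted
    # sum of those profiles plus measurement noise.
    profiles = rng.random((len(groups), n_wavelengths))
    concentrations = rng.random((n_samples, len(groups))) * 1e5   # cells/mL
    spectra = concentrations @ profiles + rng.normal(0, 50, (n_samples, n_wavelengths))

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, concentrations, random_state=0)

    # Fit one model that predicts all group concentrations from a spectrum.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    print("R^2 on held-out mixtures:", round(model.score(X_test, y_test), 3))
    ```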

    <p>Whittle will work with Fábio Duarte of the Department of Urban Studies and Planning, the Senseable City Lab, and MIT’s Center for Real Estate to refine the design of REMORA. They will develop software so the sensor can operate autonomously and be deployed remotely on mobile vessels or platforms, enabling high-resolution spatiotemporal monitoring of harmful algae. The team hopes that commercializing the sensor will bring REMORA’s unique capabilities to long-term monitoring by water utilities, environmental regulatory agencies, and water-intensive industries.</p>

  • Invisible tagging system enhances 3D object tracking

    <p>Stop me if you’ve seen this before: a black and white pixelated square in lieu of a physical menu at a restaurant.</p>

    <p>QR codes are seemingly ubiquitous in everyday life. Whether you see one on a coupon at the grocery store, on a flyer on a bulletin board, or on the wall of a museum exhibit, each code contains embedded data.&nbsp;</p>

    <p>Unfortunately, QR codes in physical spaces are sometimes <a href=”https://www.fbi.gov/contact-us/field-offices/portland/news/press-releases/oregon-fbi-tech-tuesday-building-a-digital-defense-against-qr-code-scams”>replaced or tampered with</a> to trick you into giving away your data to unwanted parties — a seemingly harmless set of pixels could lead you to dangerous links and viruses. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed another potential option: BrightMarker, an invisible, fluorescent tag hidden in 3D-printed objects, such as a ball, container, gadget case, or gear. The researchers believe their system can enhance motion tracking, virtual reality, and object detection.</p>

    <p>To create a BrightMarker, users can download the CSAIL team’s software plugin for 3D modeling programs like Blender. After placing the tag within the geometry of their design, they can export it as an STL file for 3D printing. With fluorescent filaments inserted into the printer, users can fabricate an object with a hidden tag, much like an invisible QR code. Users will need to embed their markers into an object before it’s fabricated, meaning the tags cannot be added to existing items.</p>
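
    <p>The real workflow runs as a plugin inside modeling programs like Blender; purely as an illustration, the sketch below uses the open-source trimesh library to position a hypothetical marker body just beneath a model’s top surface and export both parts for a dual-material print. The file names, dimensions, and offsets are assumptions, not the plugin’s actual behavior.</p>

    ```python
    # Minimal sketch (not the CSAIL plugin): placing a marker body just beneath
    # the top surface of a 3D model and exporting both parts as STLs for a
    # dual-material print, where a slicer would assign fluorescent filament to
    # the marker part. Dimensions and file names are hypothetical.
    import trimesh

    # Hypothetical design: a 40 x 40 x 20 mm box standing in for the user's
    # model (the real plugin works on arbitrary geometry).
    design = trimesh.creation.box(extents=[40.0, 40.0, 20.0])

    # Hypothetical marker: a thin 20 x 20 x 0.8 mm plate; in practice this
    # would carry the code pattern rather than being a plain plate.
    marker = trimesh.creation.box(extents=[20.0, 20.0, 0.8])

    # Position the marker so its top sits 0.5 mm below the design's top face:
    # hidden under normal light, but close enough to the surface to fluoresce.
    top_z = design.bounds[1][2]
    marker.apply_translation([0.0, 0.0, top_z - 0.5 - 0.4])  # 0.4 = half thickness

    # Export the two bodies separately; a multi-material slicer can then assign
    # fluorescent filament to marker.stl and regular filament to design.stl.
    design.export("design.stl")
    marker.export("marker.stl")
    ```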

    <p>The fluorescent materials enable each tag to emit light at a specific near-infrared wavelength, making them viewable with high contrast in infrared cameras. The researchers designed two attachable hardware setups capable of detecting BrightMarkers: one for smartphones and one for augmented reality (AR) and virtual reality (VR) headsets. Both can view and scan the markers, which resemble glow-in-the-dark QR codes. Surrounding objects can be blocked from view using a longpass filter, another attachable piece that passes only the fluorescent light.</p>
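
    <p>To give a rough sense of the software side of such a detection setup, here is a minimal sketch, not the BrightMarker pipeline itself: it reads frames from a hypothetical near-infrared camera behind a longpass filter, thresholds the bright fluorescent regions, and attempts to decode a QR-like code with OpenCV. The camera index, threshold, and code format are assumptions.</p>

    ```python
    # Minimal sketch (not the BrightMarker pipeline): reading frames from a
    # near-infrared camera behind a longpass filter and decoding a bright,
    # QR-like code in the fluorescent regions with OpenCV. The camera index
    # and threshold are hypothetical, and the real markers may use a
    # different code format than a literal QR code.
    import cv2

    cap = cv2.VideoCapture(0)          # hypothetical NIR camera index
    detector = cv2.QRCodeDetector()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # With a longpass filter over the lens, only the fluorescent tag should
        # appear bright; a simple threshold suppresses residual background.
        _, bright = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)

        data, corners, _ = detector.detectAndDecode(bright)
        if data:
            print("decoded tag:", data, "at corners", corners.reshape(-1, 2))

        cv2.imshow("filtered NIR view", bright)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
    ```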

    <p>BrightMarkers are imperceptible to the naked eye — and unobtrusive, meaning they don’t alter an object’s shape, appearance, or function. This makes them tamper-proof while seamlessly embedding metadata into the physical world. By adding a layer of connectivity between data and physical objects, users would have access to a more interactive experience with the world around them.</p>

    <p>“In today’s rapidly evolving world, where the lines between the real and digital environments continue to blur, there is an ever-increasing demand for robust solutions that seamlessly connect physical objects with their digital counterparts,” says MIT CSAIL and Department of Electrical Engineering and Computer Science PhD candidate Mustafa Doğa Doğan. “BrightMarkers serve as gateways to ‘ubiquitous metadata’ in the physical realm. This term refers to the concept of embedding metadata — descriptive information about the object’s identity, origin, function, and more — directly into physical items, akin to an invisible digital signature accompanying each product.”</p>

    <p><strong>BrightMarkers in action</strong></p>

    <p>The system has shown promise in virtual reality settings. For example, a toy lightsaber with an embedded BrightMarker could serve as an in-game tool for slicing through a virtual environment, with the tag-detecting hardware attachment tracking its motion. The same approach could turn other physical props into in-game objects for a more immersive VR experience.</p>

    <p>“In a future dominated by the AR and VR paradigm, object recognition, tracking, and traceability is crucial for connecting the physical and digital worlds: BrightMarker is just the beginning,” says MIT CSAIL visiting researcher Raúl García-Martín, who is doing his PhD at the University Carlos III of Madrid. “BrightMarker’s seamless tracking marks the start of this exciting journey into a tech-powered future.”</p>

    <p>As for motion tracking, BrightMarkers can be implemented into wearables that can precisely follow limb movements. For example, a user could wear a bracelet with an implanted BrightMarker, enabling a piece of detection hardware to digitize the user’s motion. If a game designer wanted to develop an authentic first-person experience, they could model their characters’ hands after the precise tracking each marker provides. The system can support users with impairments and different limb sizes, too, bridging the gap between digital and physical experiences for a wide user base.</p>
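
    <p>To make the limb-tracking idea concrete, below is a minimal sketch, not the team’s tracker, of how four detected corner pixels of a wearable tag could be turned into a 3D position with a standard perspective-n-point solve in OpenCV. The marker size, camera intrinsics, and corner coordinates are hypothetical placeholders.</p>

    ```python
    # Minimal sketch (not the BrightMarker tracker): recovering a wearable
    # tag's 3D pose from its four detected corner pixels via solvePnP.
    # The marker size, intrinsics, and corner pixels are hypothetical.
    import cv2
    import numpy as np

    MARKER_SIZE = 0.03  # hypothetical 3 cm square tag on a bracelet

    # 3D corner positions in the marker's own coordinate frame (meters).
    object_points = np.array([
        [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
        [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
        [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
        [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    ], dtype=np.float64)

    # Hypothetical pinhole intrinsics for the NIR camera (from calibration).
    camera_matrix = np.array([[800.0,   0.0, 320.0],
                              [  0.0, 800.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(5)

    # Corner pixels as they might come from the detector for one frame.
    image_points = np.array([[300.0, 200.0], [340.0, 202.0],
                             [338.0, 242.0], [298.0, 240.0]])

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        # tvec is the bracelet's position relative to the camera in meters;
        # streaming this per frame digitizes the wearer's limb motion.
        print("tag position (m):", tvec.ravel())
    ```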

    <p>BrightMarkers could also be tracked across the supply chain. Manufacturers on-site could scan the tags at different locations to grab metadata about the product’s origin and movements. Likewise, consumers could check a product’s digital signature to verify ethical sourcing and recycling information, similar to the European Union’s proposed <a href=”https://www.wbcsd.org/Pathways/Products-and-Materials/Resources/The-EU-Digital-Product-Passport#:~:text=The%20DPP%20is%20a%20tool,%2C%20production%2C%20recycling%2C%20etc.”>Digital Product Passports</a>.&nbsp;</p>

    <p>Another potential application: night vision monitoring in home security cameras. If a user wanted to ensure their possessions were safe overnight, a camera equipped with the detection hardware could watch the tagged objects and notify the owner of any movement. Unlike off-the-shelf counterparts, this camera wouldn’t need to capture the user’s whole room, thus preserving their privacy.</p>
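
    <p>A minimal sketch of that monitoring idea, assuming an OpenCV-readable night-vision camera and a QR-like tag: the loop below tracks the detected tag’s center between frames and prints an alert when it drifts more than a small pixel threshold. The camera index, threshold, and alert hook are all hypothetical.</p>

    ```python
    # Minimal sketch (not a product feature): a night-vision monitoring loop
    # that alerts when a tagged object's detected position drifts more than a
    # few pixels between frames. Camera index, threshold, and the alert hook
    # are hypothetical placeholders.
    import cv2
    import numpy as np

    ALERT_PIXELS = 10.0                 # hypothetical movement threshold
    cap = cv2.VideoCapture(0)           # hypothetical NIR security camera
    detector = cv2.QRCodeDetector()
    last_center = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found, corners = detector.detect(gray)
        if found and corners is not None:
            center = corners.reshape(-1, 2).mean(axis=0)
            if last_center is not None and np.linalg.norm(center - last_center) > ALERT_PIXELS:
                print("alert: tagged object moved")   # stand-in for a push notification
            last_center = center

    cap.release()
    ```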

    <p><strong>Better than InfraredTags and AirTags</strong></p>

    <p>Doğan and his team’s work may sound familiar: they previously developed <a href=”https://news.mit.edu/2022/invisible-labels-identify-track-objects-0128″>InfraredTags</a>, a technology for embedding data on 3D-printed tags within physical objects, which was nominated for a People’s Choice Best Demo Award at the 2022 ACM CHI Conference on Human Factors in Computing Systems. While the previous project worked only for black objects, BrightMarker gives users multiple color options. Its fluorescent materials cause the tags to emit light at a specific wavelength, making them much easier to isolate and track than InfraredTags, which could only be detected at low contrast due to noise from other wavelengths in the captured environment.</p>

    <p>“The fluorescent filaments emit a light that can be robustly filtered using our imaging hardware,” says Doğan. “This overcomes the ‘blurriness’ often associated with traditional embedded unobtrusive markers, and allows for efficient real-time tracking even when objects are in motion.”</p>

    <p>Compared to Apple’s AirTags, BrightMarkers are low-cost and low-energy. Depending on the application, though, one potential limitation is that the tags currently cannot be added to objects after fabrication. Additionally, tracking a tag can be hindered if the user’s hand or another item in the room obstructs the camera’s view. To improve detection, the team recommends combining the technology with magnetic filaments so that the object’s magnetic field can also be tracked. Detection performance could be further improved by producing filaments with higher fluorochrome concentrations.</p>

    <p>“Fluorescent object tracking markers like BrightMarker show great promise in providing a potential real-world solution for product tracking and authentication,” says Andreea Danielescu, director of the Future Technologies R&amp;D group at Accenture Labs. “In addition to supply chain and retail applications, they could also be used to verify the authenticity of products, such as vegan handbags.”</p>

    <p>“Immersive technologies require powerful scene understanding capabilities,” says Google research scientist Mar Gonzalez-Franco, who was not involved in the work. “Having invisible markers embedded, like the ones from BrightMarker, can simplify the computer vision needs and help devices identify the objects that are interactable and bridge the gap for the users of AR and VR.”</p>

    <p>Doğan is optimistic about the system’s potential to enmesh metadata in our everyday lives. “BrightMarker holds tremendous promise in reshaping our real-life interactions with technology,” he notes. “As this technology continues to evolve, we can envision a world where BrightMarkers become seamlessly integrated into our everyday objects, facilitating effortless interactions between the physical and digital realms. From retail experiences where consumers can access detailed product information in stores to industrial settings, where BrightMarkers streamline supply chain tracking, the possibilities are vast.”</p>

    <p>Doğan and García-Martín wrote the paper along with MIT CSAIL undergraduate students Patrick Haertel, Jamison O’Keefe, Ahmad Taka, and Akarsh Aurora. Raul Sanchez-Reillo, a professor at University Carlos III of Madrid, and Stefanie Mueller, a CSAIL affiliate and associate professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering, are also authors. The researchers used fluorescent filaments provided by DIC Corp. They will present their findings at the Association for Computing Machinery’s 2023 User Interface Software and Technology Symposium (UIST).</p>