Blog

  • After Amazon, an ambition to accelerate American manufacturing

    <p>After more than two decades as part of Amazon’s core leadership team, Jeff Wilke helped transform the way people buy almost everything. His next act is no less ambitious: proving that America can make just about anything.</p>

    <p>In March 2021, Wilke stepped down from his post as CEO of Amazon’s Worldwide Consumer business — encompassing the company’s online marketplace, Amazon stores, Prime, 175 fulfillment centers, and Whole Foods — and soon stepped into a new role as chair of Re:Build Manufacturing.</p>

    <p>The venture’s name signals its larger mission: demonstrating that the United States can be a 21st-century manufacturing powerhouse.</p>

    <p>Re:Build was born in spring 2020, out of conversations between Wilke and his fellow MIT Leaders for Global Operations (LGO) classmate Miles Arnone SM ’93. By March of that year, the Covid pandemic was already exposing the economic and security vulnerabilities created by decades of offshoring manufacturing.</p>

    <p>“Within two months we had laid bare all of the brittleness and problems in U.S. supply chains,” Wilke says. “That was kind of the spark for me. Having 85 percent of our pharmaceutical ingredients not made here in the U.S. seems incredibly risky when you enter a pandemic.”</p>

    <p>Wilke soon discovered that he and Arnone — who had decades of experience leading machine tool companies and overseeing investments in manufacturing ventures at asset management firms — were on the same page, in more ways than one.</p>

    <p>“We realized we hadn’t lost the passion and drive to accomplish the same kinds of things,” he says. They shared a conviction that the future of the country’s economy — and its national security — depends on developing a robust manufacturing sector that creates durable, well-paying jobs while shoring up those vulnerable supply chains.</p>

    <p>Under the leadership of Arnone as CEO and Wilke as chair, Re:Build is off to a running start. In two years, the company has grown to nearly a thousand employees, spanning sites in 10 different states. It has acquired 11 businesses with varying flavors of engineering expertise across the aerospace, clean tech, health, and industrial sectors. Re:Build is developing a suite of design and engineering capabilities to support industrial customers who need solutions for “just-in-time manufacturing” for a range of products, from airplane wings to satellites to medical devices.</p>

    <p>“We have to rebuild an industrial base that will let us manufacture here the things that make sense to manufacture here,” says Wilke.</p>

    <p><strong>Homegrown motivation </strong></p>

    <p>While the pandemic revealed the urgency of restoring the manufacturing sector, the ideas behind Re:Build had been percolating for decades.</p>

    <p>Wilke grew up in Pittsburgh in the 1970s. He witnessed the steady decline of the city’s vaunted steel industry, and all of its societal knock-on effects. “I saw the impact of the mass loss of jobs on families and our community,” he recalls.</p>

    <p>The experience left a profound impression, one that lingered even as Wilke went off to study chemical engineering at Princeton University and then parlayed his passion for computer science — as a teenager, he would come home from school and happily write code in the basement for hours — into a software development position with Andersen Consulting (now Accenture).</p>

    <p>In 1991, Wilke decided to enter the MIT LGO program (at the time known as “Leaders for Manufacturing”), enticed by its unique curriculum — technically demanding but comprehensive in a way that seemed tailored for students with previous work experience. He wanted to help shape the next chapter in the world of manufacturing and operations. “That’s why I enrolled in LGO: I wanted to help build a company that created wealth and created jobs.”</p>

    <p>In addition to earning an MBA from the MIT Sloan School of Management and a master’s degree from the School of Engineering, LGO students engage in experiential, operations-focused coursework and complete a six-month research fellowship with one of LGO’s 20-plus partner companies, such as Amazon, Verizon, or Raytheon, and now Re:Build, which became the newest industry partner in December.</p>

    <p>Students will pursue internships in the areas of lean manufacturing, computer-aided manufacturing, and process development and optimization, gaining real-world exposure to Re:Build’s cutting-edge processes in everything from “lightweighting” — substituting composite materials for heavier metals, such as in wings for drones and airplanes — to supplying key components to manufacturers working in the electrification, hydrogen, energy storage, and fusion technology sectors.</p>

    <p>“We’re one of the top hirers for this current graduating class,” says Wilke. “In LGO alums, there is this rare combination of leadership, business judgment, and deep technical competence, which is incredibly precious.” By the time the LGO Class of 2023 hires join the company, there will be 15 program graduates employed there, and counting.</p>

    <p>“You’re talking about combining all the ‘soft’ leadership skills with all the rigor required to understand the mathematics of statistics, optimization, and machine learning,” says Wilke. “It’s very hard to teach and to learn all of the pieces necessary to be competent at this, which is why there aren’t many programs like LGO.”</p>

    <p>He emerged from his time at MIT in 1993 with tools that he would use again and again, as a vice president and general manager of pharmaceutical fine chemicals at AlliedSignal (now Honeywell), and later at Amazon. “I started to view the gift that LGO gave me as a playbook for how to hone operations,” Wilke says. “They work in any environment where people and technology are working side by side.”</p>

    <p><strong>A prime application of the LGO playbook</strong></p>

    <p>Wilke brought a manufacturing mindset to his transformative work at Amazon.</p>

    <p>He was hired in 1999 by Amazon founder Jeff Bezos to solve a wicked logistical puzzle: how to quickly process, fill, and ship the ever-growing number of unique, impossible-to-predict orders that came in via Amazon.com every day.</p>

    <p>A key insight helped Wilke unlock the solution. When he walked into one of the company’s fulfillment centers for the first time, Wilke didn’t see a retail warehouse but a factory.</p>

    <p>“I saw people and process and machines and technology and computer science,” he recalls. “Fulfillment centers, airports, hospitals, hotels, even Disneyland — these all are effectively complex operations that are manufacturing something, though not necessarily a physical product,” he says. “For a long time, Amazon didn’t manufacture a physical product, but it assembled orders for customers.”</p>

    <p>As Amazon’s vice president and general manager for operations, Wilke drew on his LGO playbook to solve a host of other challenges, including revamping the process for fulfilling customer orders.</p>

    <p>“At LGO, we spent a lot of time talking about the mathematics of variation, ways to characterize it and improve processes by understanding it,” he says. “It informed this idea that supply chain is a great place to apply the analytical tools of optimization and process control.”</p>

    <p>Wilke and his team redesigned the fulfillment centers’ layout, built new software and algorithms for stocking items and combining them efficiently in orders, and shrank the average time required to complete an order. By 2003, Wilke’s managers could get any item out the door in two-and-a-half hours. That enabled the company to make very precise guarantees to customers of when they would receive the item.</p>

    <p>Around the same time, another team at Amazon was developing a new subscription service and searching for a keystone offering around which to build it. “We decided to build that service around fast delivery,” Wilke says.</p>

    <p>Thus was born Amazon Prime, which now has well over 200 million subscribers around the world who pay for access to streaming music, movies, deals and discounts, and, of course, free two-day delivery. Today more than half of all U.S. online purchases are made via Amazon.</p>

    <p>At Amazon, Wilke was also instrumental in developing and codifying the company’s famous “leadership principles.”</p>

    <p>“Some were already in use, and were what attracted me to Amazon,” he says, “and some articulate a style of leadership that was heavily influenced by LGO ideas.”</p>

    <p>He points to “Dive Deep” as an example. “Understanding the entire business and process details, this idea that ‘leaders operate at all levels’ and ‘no task is beneath them’ — that’s totally LGO!”</p>

    <p><strong>Software and service</strong></p>

    <p>Wilke believes that the original mission of LGO — “to bring leadership and technology together to improve these operating-intensive businesses” — remains just as important now as it was when he attended.</p>

    <p>That’s one reason Wilke has stayed closely involved with the MIT LGO program, serving as a co-chair of the governing board for a decade. “It’s intellectually stimulating, and it feels like the program is pursuing a noble mission,” he says.</p>

    <p>“Jeff’s impact on the world and our daily lives is tremendous,” says LGO Executive Director Thomas Roemer. “He inspires everyone in the MIT LGO community with his example of applying our technical and leadership grounding in entirely new ways that transform the world. But I am even more impressed by his humility and his passion and dedication to the LGO program.”</p>

    <p>At the same time, he has been a strong advocate for ensuring that LGO’s curriculum keeps pace with the times.</p>

    <p>“We have to reinvent management science for a world where machines and humans work side by side,” he says. He credits the recent emergence of ChatGPT and other advances in artificial intelligence with awakening more educators and industry leaders to the imperative of changing the way they operate. “The trick to stay relevant, for LGO, is to stay on top of technology that changes how business is done.”</p>

    <p>Wilke walks this talk. Right after leaving Amazon in early 2021, and before throwing himself into the task of revitalizing American manufacturing, he spent two weeks teaching himself how to code in Python.</p>

    <p>Wilke has since carved out time to bring that passion for marrying software, hardware, and human insight to other corners of academia and America, expanding opportunities along the way. Through their family foundation, Wilke and his wife Liesl have committed to funding computer science professorships at each of the 35 tribal colleges and universities serving Indigenous students across the United States.</p>

    <p>Wilke, who serves on the board of Code.org, is a big believer in the productivity-expanding power of investing in software.</p>

    <p>With 25 in-house computer scientists, software is one of Re:Build’s core capabilities. When he talks to leaders at other firms, Wilke looks to see if there is a computer scientist in the C-suite. “You want someone sitting at that table who is still writing code, up on the most current architectures, who can advise executives as they make choices on process for products.”</p>

    <p><strong>Looking to the long term</strong></p>

    <p>At Re:Build, Wilke and Arnone have developed their own set of principles to guide their employees. Many are distilled from Wilke’s storied career — and similarly inflected by their LGO experience. He points to number 14: “We focus on and measure inputs we control and expect excellent performance on input metrics to create long-term value.”</p>

    <p>Wilke is determined to create a culture at Re:Build that’s focused not on short-term financial engineering or quarterly earnings targets, but on long-term value creation — for investors, for employees, and for society.</p>

    <p>Re:Build provides a range of services for manufacturing companies that assemble products as diverse and complex as airplanes, power plants, stents, or satellites. “Companies building these things need sophisticated partners that can co-engineer with them, design with them, build subcomponents, and maybe even do final assembly with them,” Wilke says.</p>

    <p>Their initial focus has been on acquiring existing companies; over time the company plans to develop its own manufacturing plants. In April, Re:Build announced that it would build its first one near Pittsburgh (New Kensington, Pennsylvania), not far from where Wilke grew up. “I didn’t put my hand on the scale!” he says.</p>

    <p>Building those plants is key to helping strong companies realize their potential — but it is also capital-intensive. Wilke points to the incentive structures of private equity funds — which want to see much quicker returns — as a key force in driving manufacturing offshore over the past several decades.</p>

    <p>“Building good companies takes time,” he says. If they succeed, the larger case for a broader renaissance in American manufacturing will make itself. “Money follows success. We don’t have to do much other than have people who invested in us originally do well.”</p>

    <p>“We are just getting started. And I don’t think we’ll be the only company doing this.”</p>

  • Researchers develop novel AI-based estimator for manufacturing medicine

    <p>When medical companies manufacture the pills and tablets that treat any number of illnesses, aches, and pains, they need to isolate the active pharmaceutical ingredient from a suspension and dry it. The process requires a human operator to monitor an industrial dryer, agitate the material, and watch for the compound to take on the right qualities for compressing into medicine. The job depends heavily on the operator’s observations.</p>

    <p>Methods for making that process less subjective and a lot more efficient are the subject of <a href="https://www.nature.com/articles/s41467-023-36816-2" target="_blank">a recent <em>Nature Communications</em> paper</a> authored by researchers at MIT and Takeda. The paper’s authors devise a way to use physics and machine learning to categorize the rough surfaces that characterize particles in a mixture. The technique, which uses a physics-enhanced autocorrelation-based estimator (PEACE), could change pharmaceutical manufacturing processes for pills and powders, increasing efficiency and accuracy and resulting in fewer failed batches of pharmaceutical products.</p>

    <p>“Failed batches or failed steps in the pharmaceutical process are very serious,” says Allan Myerson, a professor of practice in the MIT Department of Chemical Engineering and one of the study’s authors. “Anything that improves the reliability of the pharmaceutical manufacturing, reduces time, and improves compliance is a big deal.”</p>

    <p>The team’s work is part of an ongoing collaboration between Takeda and MIT, launched in 2020. The MIT-Takeda Program aims to leverage the experience of both MIT and Takeda to solve problems at the intersection of medicine, artificial intelligence, and health care.</p>

    <p>In pharmaceutical manufacturing, determining whether a compound is adequately mixed and dried ordinarily requires stopping an industrial-sized dryer and taking samples off the manufacturing line for testing. Researchers at Takeda thought artificial intelligence could improve the task and reduce stoppages that slow down production. Originally the research team planned to use videos to train a computer model to replace a human operator. But determining which videos to use to train the model still proved too subjective. Instead, the MIT-Takeda team decided to illuminate particles with a laser during filtration and drying, and measure particle size distribution using physics and machine learning.&nbsp;</p>

    <p>“We just shine a laser beam on top of this drying surface and observe,” says Qihang Zhang, a doctoral student in MIT’s Department of Electrical Engineering and Computer Science and the study’s first author.&nbsp;</p>
    <p>A physics-derived equation describes the interaction between the laser and the mixture, while machine learning characterizes the particle sizes. The measurement doesn’t require stopping and restarting the drying process, which means the entire job is more secure and more efficient than standard operating procedure, according to George Barbastathis, professor of mechanical engineering at MIT and corresponding author of the study.</p>

    <p>The machine learning algorithm also does not require many datasets to learn its job, because the physics allows for speedy training of the neural network.</p>

    <p>“We utilize the physics to compensate for the lack of training data, so that we can train the neural network in an efficient way,” says Zhang. “Only a tiny amount of experimental data is enough to get a good result.”</p>
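
    <p>The published PEACE equations are not reproduced here, but the general pattern the team describes (a physics-based forward model supplies most of the structure, so a small network can learn from only a handful of labeled measurements) can be illustrated with a minimal sketch. Everything below, including the stand-in scattering model, the network size, and the synthetic data, is a hypothetical illustration rather than the paper’s method.</p>

    ```python
    # Minimal sketch of a physics-informed estimator in the spirit of PEACE.
    # The "physics_forward" function is a generic stand-in, NOT the paper's model:
    # it maps an assumed particle-size distribution to an autocorrelation-like
    # signal, and its mismatch with the measurement is an extra loss term, which
    # is what lets the network train on very little labeled data.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    N_LAGS, N_BINS = 64, 16   # autocorrelation lags in, size-distribution bins out

    def physics_forward(size_dist, lags):
        """Toy stand-in physics model: larger particles -> slower correlation decay."""
        radii = torch.linspace(1.0, 10.0, size_dist.shape[-1])           # arbitrary size bins
        decay = torch.exp(-lags[None, :, None] / radii[None, None, :])   # (1, lags, bins)
        return (decay * size_dist[:, None, :]).sum(-1)                   # (batch, lags)

    class Peace(nn.Module):
        """Small network mapping an autocorrelation signal to a size distribution."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(N_LAGS, 64), nn.ReLU(),
                                     nn.Linear(64, N_BINS), nn.Softmax(dim=-1))

        def forward(self, g2):
            return self.net(g2)

    lags = torch.linspace(0.1, 5.0, N_LAGS)

    # A tiny synthetic "experimental" dataset: only 8 labeled measurements.
    true_dist = torch.softmax(torch.randn(8, N_BINS), dim=-1)
    measured = physics_forward(true_dist, lags) + 0.01 * torch.randn(8, N_LAGS)

    model = Peace()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    for step in range(2000):
        opt.zero_grad()
        pred = model(measured)
        # supervised term on the few labels + physics-consistency term on the signal
        loss = mse(pred, true_dist) + mse(physics_forward(pred, lags), measured)
        loss.backward()
        opt.step()

    print("final training loss:", float(loss))
    ```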

    <p>Today, the only inline processes used for particle measurements in the pharmaceutical industry are for slurry products, where crystals float in a liquid. There is no method for measuring particles within a powder during mixing. Powders can be made from slurries, but when a liquid is filtered and dried its composition changes, requiring new measurements. In addition to making the process quicker and more efficient, using the PEACE mechanism makes the job safer because it requires less handling of potentially highly potent materials, the authors say.&nbsp;</p>

    <p>The ramifications for pharmaceutical manufacturing could be significant, allowing drug production to be more efficient, sustainable, and cost-effective, by reducing the number of experiments companies need to conduct when making products. Monitoring the characteristics of a drying mixture is an issue the industry has long struggled with, according to Charles Papageorgiou, the director of Takeda’s Process Chemistry Development group and one of the study’s authors.&nbsp;</p>

    <p>“It is a problem that a lot of people are trying to solve, and there isn’t a good sensor out there,” says Papageorgiou. “This is a pretty big step change, I think, with respect to being able to monitor, in real time, particle size distribution.”</p>

    <p>Papageorgiou said that the mechanism could have applications in other industrial pharmaceutical operations. At some point, the laser measurements may be used to train video-based imaging, allowing manufacturers to use a camera for analysis rather than lasers. The company is now working to assess the tool on different compounds in its lab.&nbsp;</p>

    <p>The results come directly from collaboration between Takeda and three MIT departments: Mechanical Engineering, Chemical Engineering, and Electrical Engineering and Computer Science. Over the last three years, researchers at MIT and Takeda have worked together on 19 projects focused on applying machine learning and artificial intelligence to problems in the health-care and medical industry as part of the MIT-Takeda Program.&nbsp;</p>

    <p>Often, it can take years for academic research to translate to industrial processes. But researchers are hopeful that direct collaboration could shorten that timeline. Takeda is within walking distance of MIT’s campus, which allowed researchers to set up tests in the company’s lab, and real-time feedback from Takeda helped MIT researchers structure their research based on the company’s equipment and operations.</p>

    <p>Combining the expertise and mission of both entities helps researchers ensure their experimental results will have real-world implications. The team has already filed for two patents and has plans to file for a third. &nbsp;</p>

  • Open-source platform simulates wildlife for soft robotics designers

    <p>Since the term “soft robotics” was <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5995266/">adopted</a> in 2008, engineers in the field have been building diverse representations of flexible machines useful in exploration, locomotion, rehabilitation, and even space. One source of inspiration: the way animals move in the wild.<br />
    <br />
    A team of MIT researchers has taken this a step further, developing <a href="https://openreview.net/forum?id=Xyme9p1rpZw" target="_blank">SoftZoo</a>, a bio-inspired platform that enables engineers to study soft robot co-design. The framework jointly optimizes two coupled elements: design, which determines what the robot will look like, and control, the system that enables robotic motion, improving how users automatically generate outlines for potential machines.<br />
    <br />
    Taking a walk on the wild side, the platform features 3-D models of animals such as panda bears, fish, sharks, and caterpillars as designs that can simulate soft robotics tasks like locomotion, agile turning, and path following in different environments. Whether in snow, desert, clay, or water, the platform demonstrates the performance trade-offs of various designs in different terrains.</p>

    <p>“Our framework can help users find the best configuration for a robot’s shape, allowing them to design soft robotics algorithms that can do many different things,” says MIT PhD student Tsun-Hsuan Wang, an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who is a lead researcher on the project. “In essence, it helps us understand the best strategies for robots to interact with their environments.”</p>

    <p>SoftZoo is more comprehensive than similar platforms, which already simulate design and control, because it models movement that reacts to the physical features of various biomes. The framework’s versatility comes from a differentiable multiphysics engine, which allows for the simulation of several aspects of a physical system at the same time, such as a baby seal turning on ice or a caterpillar inching across a wetland environment. The engine’s differentiability optimizes co-design by reducing the number of the often expensive simulations required to solve computational control and design problems. As a result, users can design and move soft robots with more sophisticated, specified algorithms.</p>
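
    <p>As a rough illustration of what differentiability buys, the toy sketch below jointly optimizes a handful of “design” parameters (per-segment stiffness) and “control” parameters (actuation amplitudes) by backpropagating through a stand-in dynamics function. The simulator, parameters, and objective here are invented for illustration and are not the SoftZoo API; the point is only that gradients flowing through the simulation let the body and the controller be updated together.</p>

    ```python
    # Toy gradient-based co-design: NOT the SoftZoo engine, just a differentiable
    # stand-in that shows design and control being optimized jointly.
    import torch

    torch.manual_seed(0)
    N_SEG, N_STEPS, DT = 8, 100, 0.05

    design = torch.rand(N_SEG, requires_grad=True)     # per-segment stiffness in [0, 1]
    control = torch.zeros(N_SEG, requires_grad=True)   # per-segment actuation amplitude

    def simulate(design, control):
        """Stand-in dynamics: distance travelled depends on both body and controller."""
        x = torch.tensor(0.0)
        v = torch.tensor(0.0)
        for t in range(N_STEPS):
            gait = torch.sin(torch.tensor(0.3 * t)).clamp(min=0.0)    # stroke-and-recover phase
            thrust = (torch.tanh(control) * design).sum() * gait      # actuation filtered by morphology
            v = v + DT * (thrust - 0.5 * v)                           # simple drag term
            x = x + DT * v
        return x

    opt = torch.optim.Adam([design, control], lr=0.05)
    for it in range(200):
        opt.zero_grad()
        loss = -simulate(design, control)        # maximize distance travelled
        loss.backward()                          # gradients flow through the "simulation"
        opt.step()
        with torch.no_grad():
            design.clamp_(0.0, 1.0)              # keep the body physically plausible

    print("distance after co-design:", float(simulate(design, control)))
    ```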

    <p>The system’s ability to simulate interactions with different terrain illustrates the importance of morphology, a branch of biology that studies the shapes, sizes, and forms of different organisms. Depending on the environment, some biological structures are more optimal than others, much like comparing blueprints for machines that complete similar tasks.&nbsp;</p>

    <p>These biological outlines can inspire more specialized, terrain-specific artificial life. “A jellyfish’s gently undulating geometry allows it to efficiently travel across large bodies of water, inspiring researchers to develop new breeds of soft robots and opening up unlimited possibilities of what artificial creatures cultivated entirely in silico can be capable of,” says Wang. “Additionally, dragonflies can perform very agile maneuvers that other flying creatures cannot complete because they have special structures on their wings that change their center of mass when they fly. Our platform optimizes locomotion the same way a dragonfly is naturally more adept at working through its surroundings.”</p>

    <p>Robots previously struggled to navigate through cluttered environments because their bodies were not compliant with their surroundings. With SoftZoo, though, designers could develop the robot’s brain and body simultaneously, co-optimizing both terrestrial and aquatic machines to be more aware and specialized. With increased behavioral and morphological intelligence, the robots would then be more useful in completing rescue missions and conducting exploration. If a person went missing during a flood, for example, the robot could potentially traverse the waters more efficiently because it was optimized using methods demonstrated in the SoftZoo platform.</p>

    <p>“SoftZoo provides open-source simulation for soft robot designers, helping them build real-world robots much more easily and flexibly while accelerating the machines’ locomotion capabilities in diverse environments,” adds study co-author Chuang Gan, a research scientist at the MIT-IBM Watson AI Lab who will soon be an assistant professor at the University of Massachusetts at Amherst.</p>

    <p>“This computational approach to co-designing the soft robot bodies and their brains (that is, their controllers) opens the door to rapidly creating customized machines that are designed for a specific task,” adds Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor in the MIT Department of Electrical Engineering and Computer Science (EECS), who is another author of the work.</p>

    <p>Before any type of robot is constructed, the framework could substitute for field testing in environments that are difficult to access. For example, assessing how a bear-like robot behaves in a desert may be challenging for a research team working in the urban plains of Boston. Instead, soft robotics engineers could use 3-D models in SoftZoo to simulate different designs and evaluate how effective the algorithms controlling their robots are at navigation. In turn, this would save researchers time and resources.</p>

    <p>Still, the limitations of current fabrication techniques stand in the way of bringing these soft robot designs to life. “Transferring from simulation to physical robot remains unsolved and requires further study,” says Wang. “The muscle models, spatially varying stiffness, and sensorization in SoftZoo cannot be straightforwardly realized with current fabrication techniques, so we are working on these challenges.”</p>

    <p>In the future, the platform’s designers are eyeing applications in human mechanics, such as manipulation, given its ability to test robotic control. To demonstrate this potential, Wang’s team designed a 3-D arm <a href="https://sites.google.com/view/softzoo-iclr-2023/simple-extension-to-manipulation">throwing</a> a snowball forward. By including the simulation of more human-like tasks, soft robotics designers could then use the platform to assess soft robotic arms that grasp, move, and stack objects.</p>

    <p>Wang, Gan, and Rus wrote a <a href="https://openreview.net/forum?id=Xyme9p1rpZw" target="_blank">paper</a> on the work alongside EECS PhD student and CSAIL affiliate Pingchuan Ma, Harvard University postdoc Andrew Spielberg PhD ’21, Carnegie Mellon University PhD student Zhou Xian, UMass Amherst Associate Professor Hao Zhang, and MIT professor of brain and cognitive sciences and CSAIL affiliate Joshua B. Tenenbaum.</p>

    <p>Wang completed this work during an internship at the MIT-IBM Watson AI Lab, with the NSF EFRI Program, DARPA MCS Program, MIT-IBM Watson AI Lab, and gift funding from MERL, Cisco, and Amazon all providing support for the project. The team’s research will be presented at the 2023 International Conference on Learning Representations this month.</p>

  • Moving perovskite advancements from the lab to the manufacturing floor

    <p><em>The following was issued as a joint announcement from MIT.nano and the MIT Research Laboratory for Electronics; CubicPV; Verde Technologies; Princeton University; and the University of California at San Diego.</em></p>

    <p>Tandem solar cells are made of stacked materials — such as silicon paired with perovskites — that together absorb more of the solar spectrum than single materials, resulting in a dramatic increase in efficiency. Their potential to generate significantly more power than conventional cells could make a meaningful difference in the race to combat climate change and the transition to a clean-energy future.</p>

    <p>However, current methods to create stable and efficient perovskite layers require time-consuming, painstaking rounds of design iteration and testing, inhibiting their development for commercial use. Today, the U.S. Department of Energy Solar Energy Technologies Office (SETO) announced that MIT has been selected to receive an $11.25 million cost-shared award to establish a new research center to address this challenge by using a co-optimization framework guided by machine learning and automation.</p>

    <p>A collaborative effort with lead industry participant CubicPV, solar startup Verde Technologies, and academic partners Princeton University and the University of California San Diego (UC San Diego), the center will bring together teams of researchers to support the creation of perovskite-silicon tandem solar modules that are co-designed for both stability and performance, with goals to significantly accelerate R&amp;D and the transfer of these achievements into commercial environments.</p>

    <p>“Urgent challenges demand rapid action. This center will accelerate the development of tandem solar modules by bringing academia and industry into closer partnership,” says MIT professor of mechanical engineering Tonio Buonassisi, who will direct the center. “We’re grateful to the Department of Energy for supporting this powerful new model and excited to get to work.”</p>

    <p>Adam Lorenz, CTO of solar energy technology company CubicPV, stresses the importance of thinking about scale, alongside quality and efficiency, to accelerate the perovskite effort into the commercial environment. “Instead of chasing record efficiencies with tiny pixel-sized devices and later attempting to stabilize them, we will simultaneously target stability, reproducibility, and efficiency,” he says. “It’s a module-centric approach that creates a direct channel for R&amp;D advancements into industry.”</p>

    <p>The center will be named Accelerated Co-Design of Durable, Reproducible, and Efficient Perovskite Tandems, or ADDEPT. The grant will be administered through the MIT Research Laboratory for Electronics (RLE).</p>

    <p>David Fenning, associate professor of nanoengineering at UC San Diego, has worked with Buonassisi on the idea of merging materials, automation, and computation, specifically in this field of artificial intelligence and solar, since 2014. Now, a central thrust of the ADDEPT project will be to deploy machine learning and robotic screening to optimize processing of perovskite-based solar materials for efficiency and durability.</p>
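
    <p>The center’s pipeline is not described in detail here, but the closed-loop, machine-learning-guided screening it refers to can be sketched generically: a surrogate model proposes the next process condition, a robot runs the experiment, and the result updates the model. In the hypothetical sketch below, the process parameters, the scoring function, and the run_experiment stand-in are all invented for illustration; only the loop structure reflects the approach described.</p>

    ```python
    # Generic sketch of ML-guided process screening (not the ADDEPT or PASCAL
    # pipeline). A Gaussian-process surrogate picks the next condition to test
    # by trading off predicted score against uncertainty.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def run_experiment(temp_c, solvent_ratio):
        """Placeholder for a robotic synthesis-and-measurement step; here it is
        just a synthetic stability-weighted efficiency score."""
        return (-(temp_c - 110) ** 2 / 400.0
                - (solvent_ratio - 0.3) ** 2 * 20.0
                + rng.normal(0, 0.05))

    # candidate grid of process conditions (annealing temperature, solvent ratio)
    temps = np.linspace(80, 160, 17)
    ratios = np.linspace(0.1, 0.9, 17)
    candidates = np.array([[t, r] for t in temps for r in ratios])

    # seed with a few random experiments, then iterate: fit surrogate -> pick next point
    X = candidates[rng.choice(len(candidates), size=5, replace=False)]
    y = np.array([run_experiment(t, r) for t, r in X])

    for _ in range(15):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        nxt = candidates[np.argmax(mu + sigma)]          # upper-confidence-bound pick
        X = np.vstack([X, nxt])
        y = np.append(y, run_experiment(*nxt))

    best = X[np.argmax(y)]
    print(f"best condition found: {best[0]:.0f} C, solvent ratio {best[1]:.2f}")
    ```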

    <p>“We have already seen early indications of successful technology transfer between our UC San Diego robot PASCAL and industry,” says Fenning. “With this new center, we will bring research labs and the emerging perovskite industry together to improve reproducibility and reduce time to market.”</p>

    <p>“Our generation has an obligation to work collaboratively in the fight against climate change,” says Skylar Bagdon, CEO of Verde Technologies, which received the American-Made Perovskite Startup Prize. “Throughout the course of this center, Verde will do everything in our power to help this brilliant team transition lab-scale breakthroughs into the world where they can have an impact.”</p>

    <p>Several of the academic partners echoed the importance of the joint effort between academia and industry. Barry Rand, professor of electrical and computer engineering at the Andlinger Center for Energy and the Environment at Princeton University, pointed to the intersection of scientific knowledge and market awareness. “Understanding how chemistry affects films and interfaces will empower us to co-design for stability and performance,” he says. “The center will accelerate this use-inspired science, with close guidance from our end customers, the industry partners.”</p>

    <p>A critical resource for the center will be MIT.nano, a 200,000-square-foot research facility set in the heart of the campus. MIT.nano Director Vladimir Bulović, the Fariborz Maseeh (1990) Professor of Emerging Technology, says he envisions MIT.nano as a hub for industry and academic partners, facilitating technology development and transfer through shared lab space, open-access equipment, and streamlined intellectual property frameworks.</p>

    <p>“MIT has a history of groundbreaking innovation using perovskite materials for solar applications,” says Bulović. “We’re thrilled to help build on that history by anchoring ADDEPT at MIT.nano and working to help the nation advance the future of these promising materials.”</p>

    <p>MIT was selected as a part of the SETO Fiscal Year 2022 Photovoltaics (PV) funding program, an effort to reduce costs and supply chain vulnerabilities, further develop durable and recyclable solar technologies, and advance perovskite PV technologies toward commercialization. ADDEPT is one project that will tackle perovskite durability, which will extend module life. The overarching goal of these projects is to lower the levelized cost of electricity generated by PV.</p>

    <p>Research groups involved with the ADDEPT project at MIT include Buonassisi’s Accelerated Materials Laboratory for Sustainability (AMLS), Bulović’s Organic and Nanostructured Electronics (ONE) Lab, and the Bawendi Group led by Lester Wolfe Professor in Chemistry Moungi Bawendi. Also working on the project is Jeremiah Mwaura, research scientist in the ONE Lab.</p>

  • 3 Questions: Yossi Sheffi on AI and the future of the supply chain

    <p><em>Global supply chains are immense feats of technological and organizational sophistication. They are also, as the onset of the Covid-19 pandemic showed, vulnerable to unexpected developments. Will that change as artificial intelligence becomes a bigger part of supply chains? And what will happen to workers in the process? </em></p>


    <p><em>MIT Professor Yossi Sheffi explores these topics in a new book, “The Magic Conveyor Belt: AI, Supply Chains, and the Future of Work,” published by MIT’s CTL Media. Sheffi, the Elisha Gray II Professor of Engineering Systems at MIT, is also the director of MIT’s Center for Transportation and Logistics, which just marked its 50th anniversary. He talked with </em>MIT News<em> about the new book.</em></p>


    <p><strong>Q:</strong> Why did you write this book?</p>


    <p><strong>A:</strong> After the pandemic started, suddenly supply chains became hot. For the 50th anniversary of the Center for Transportation and Logistics in March, we thought about writing a paper, which became this book. In the first part of the book, I just explain how complex supply chains are, and how amazing they are. You should never be upset when something is not available in a supermarket or on Amazon; you should be amazed that something is there, once you understand what it takes to get it there. Supply chains underpin not only people’s standard of living by ensuring the availability of medicines and everyday items, but they are crucial to responding to modern challenges such as resilience and sustainability. The book then examines the technology underlying supply chain operations and business in general, especially AI, leading to an exploration of the future of work. These technologies are moving so fast it’s hard to know what will happen, of course.</p>


    <p><strong>Q:</strong> You can’t predict what impact AI will have, but how do you think about it, and discuss it in the book?</p>


    <p><strong>A:</strong> I looked at all the industrial revolutions; the fear of losing a job has always been prevalent. In 1589, William Lee asked the Queen of England for a patent for his stocking-making device. The queen shut it down, fearing job losses in the industry. When looms were automated in the 19th century, or when Ford started the production line for the Model T, this fear led to violence.</p>


    <p>But with every technological change more jobs were created than lost. Every time, people said, “But now it’s different.” Even with AI, there’s a good chance more jobs will be created than lost. When ATMs came about, people thought there would be no more bank tellers. But the number of bank tellers in the U.S. has doubled. Why? Because opening a branch became a lot cheaper. When Ford made cars by hand, they had only a few hundred employees. With the Model T, there were 157,000, but this is not even the big story. When people could afford cars, they started driving everywhere; motels and restaurants sprang up all around the U.S., and millions of jobs were created. So you have growth in a profession itself and in related areas.</p>


    <p>There is little doubt that modern AI can increase productivity and unleash a new era of economic growth if it’s used for good. But I’d like to say one thing about why it may actually be somewhat different this time: the speed of change. Because unlike electricity or the steam engine, you don’t have to build huge plants. It’s software which, once developed, moves at the speed of light. Governments may have to prepare more for retraining and putting people in trade school faster. As AI becomes more sophisticated, it will develop a larger range of possibilities.</p>


    <p><strong>Q:</strong> Taking those insights, how might we see this being applied to supply chains?</p>


    <p><strong>A:</strong> Supply chains are automating fast. Warehouses are full of robots. It’s the number one robotic application in China and many other places. A profession that used to be about driving trucks and moving boxes, and that was male-dominated, is now increasingly a technical occupation, and we see a lot more women on the job.</p>


    <p>But as of 2015, truck driving was still the number one profession in 29 U.S. states. Autonomous trucks are not going to drive into cities. To go there they would have to cross over white lines on roads, go over the sidewalks, and so on, which they are not programmed to do. Instead, the model for autonomous trucks is now what’s called exit-to-exit, where there would be transfer stations near highways and outside cities. A truck goes from the plant to the highway exit, then to the transfer facility to unload its goods. This is likely to create a lot of new jobs within the first mile and the last mile of an autonomous truck trip, and a lot of jobs at these stations, including retail, maintenance, and audit/check services. It may be hard to imagine, but I can see more jobs being created. I’m optimistic, but that’s my nature.</p>


    <p>The reason I like to work in this area is that it’s a combination of things — technology and processes — but in the end, supply chains are human networks. Ultimately, supply chains are made of people who make, store, move, contract, communicate — all augmented by increasingly powerful technologies. And technology is an augmenting force for many of the uniquely human qualities, not a replacement force.</p>

  • Flow batteries for grid-scale energy storage

    <p>In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.</p>

    <p>A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.</p>

    <p><strong>Flow batteries: Design and operation</strong></p>

    <p>A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)</p>

    <p>At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.</p>

    <p>When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.</p>

    <p>Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”</p>

    <p><strong>Benefits and challenges</strong></p>

    <p>A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.</p>
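
    <p>A back-of-envelope calculation shows why this decoupling matters: stored energy scales with electrolyte concentration and tank volume, while power scales with electrode area and current density. The sketch below uses generic illustration values, not figures from any particular system.</p>

    ```python
    # Why tanks set energy and the reactor sets power: a sizing sketch with
    # generic, illustrative numbers (theoretical upper bounds, no losses).
    F = 96485  # Faraday constant, C/mol

    def energy_kwh(conc_mol_per_l, tank_volume_l, electrons_per_mol, cell_voltage_v):
        """Theoretical stored energy, set by the electrolyte tanks."""
        charge_ah = conc_mol_per_l * tank_volume_l * electrons_per_mol * F / 3600.0
        return charge_ah * cell_voltage_v / 1000.0

    def power_kw(stack_area_m2, current_density_a_m2, cell_voltage_v):
        """Deliverable power, set by the electrode (reactor) area."""
        return stack_area_m2 * current_density_a_m2 * cell_voltage_v / 1000.0

    # 1.6 M active species, 20,000 L per tank, one-electron transfer, 1.3 V cell
    print(f"energy ~ {energy_kwh(1.6, 20_000, 1, 1.3):,.0f} kWh")   # grows with tank size
    # 50 m^2 of electrode area at 2,000 A/m^2
    print(f"power  ~ {power_kw(50, 2_000, 1.3):,.0f} kW")           # grows with reactor size
    ```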

    <p>However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.</p>

    <p>Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.</p>
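
    <p>A toy model makes the effect of crossover, and of periodic remediation, concrete. The sketch below treats crossover as a constant fractional loss of capacity per day and resets the electrolyte whenever it is serviced; the loss rate is invented purely for illustration.</p>

    ```python
    # Toy capacity-fade model: crossover as a constant daily fractional loss,
    # with an optional periodic remediation (electrolyte rebalancing/replacement).
    # The loss rate is illustrative only.
    import math

    def capacity_fraction(days, loss_per_day=0.001, remediate_every=None):
        """Fraction of original capacity remaining after `days` of operation."""
        frac, since_service = 1.0, 0
        for _ in range(days):
            frac *= math.exp(-loss_per_day)   # daily loss of active species to the other tank
            since_service += 1
            if remediate_every and since_service >= remediate_every:
                frac, since_service = 1.0, 0  # electrolyte rebalanced or replaced
        return frac

    print(f"after ~2.5 years, no service     : {capacity_fraction(900):.0%}")
    print(f"after ~2.5 years, yearly service : {capacity_fraction(900, remediate_every=365):.0%}")
    ```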

    <p><strong>The state of the art: Vanadium</strong></p>

    <p>A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.</p>
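
    <p>For reference, the textbook half-reactions of the all-vanadium chemistry (written here in the charging direction) show how a single element, cycling among four oxidation states, serves on both sides of the cell:</p>

    ```latex
    % All-vanadium redox flow battery half-reactions (charging direction)
    \begin{align*}
    \text{negative electrode:}\quad & \mathrm{V^{3+} + e^- \longrightarrow V^{2+}} \\
    \text{positive electrode:}\quad & \mathrm{VO^{2+} + H_2O \longrightarrow VO_2^{+} + 2H^{+} + e^-} \\
    \text{standard cell voltage:}\quad & E^\circ \approx 1.26~\mathrm{V}
    \end{align*}
    ```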

    <p>First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.</p>

    <p>And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.</p>

    <p>However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.</p>

    <p><strong>Beyond vanadium</strong></p>

    <p>The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.</p>

    <p>Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”</p>

    <p>So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.</p>

    <p><strong>Techno-economic modeling as a guide</strong></p>

    <p>A good way to understand and assess the economic viability of new and emerging energy technologies is using techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.</p>

    <p>Using that approach, Rodby <a href="http://doi.org/10.1016/j.jpowsour.2021.230085">developed a framework</a> for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.</p>
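
    <p>At its core, the levelized cost of storage is discounted lifetime cost divided by discounted lifetime energy delivered. The minimal sketch below reduces the framework’s dynamic degradation model to a constant yearly capacity fade plus a periodic remediation charge; every input number is a placeholder, not a result from the published framework.</p>

    ```python
    # Minimal levelized-cost-of-storage (LCOS) sketch: discounted lifetime costs
    # divided by discounted lifetime energy delivered. All numbers are placeholders.
    def lcos(capex, yearly_opex, remediation_cost, remediation_every_yrs,
             energy_per_cycle_kwh, cycles_per_year, fade_per_year,
             lifetime_yrs, discount_rate):
        costs, energy, capacity = float(capex), 0.0, 1.0
        for year in range(1, lifetime_yrs + 1):
            d = (1 + discount_rate) ** year                 # discount factor for this year
            opex = yearly_opex
            if remediation_every_yrs and year % remediation_every_yrs == 0:
                opex += remediation_cost                    # e.g., electrolyte rebalancing
            costs += opex / d
            energy += capacity * energy_per_cycle_kwh * cycles_per_year / d
            capacity *= (1 - fade_per_year)                 # capacity lost to degradation/crossover
        return costs / energy                               # dollars per kWh delivered

    value = lcos(capex=2_000_000, yearly_opex=30_000, remediation_cost=100_000,
                 remediation_every_yrs=5, energy_per_cycle_kwh=4_000,
                 cycles_per_year=300, fade_per_year=0.02,
                 lifetime_yrs=20, discount_rate=0.08)
    print(f"LCOS ~ ${value:.2f} per kWh delivered")
    ```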

    <p>Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that using the same “parent” material in both tanks is rarely feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.</p>

    <p>Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.</p>

    <p><strong>Finite-lifetime materials</strong></p>

    <p>While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.</p>

    <p>Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”</p>

    <p>Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.</p>

    <p>The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until way down the “innovation pipeline” that researchers start to address practical questions concerning the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms and how they might be cost-effectively reversed or remediated should be an upfront design criterion.</p>

    <p><strong>Infinite-lifetime species</strong></p>

    <p>The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.</p>

    <p>Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.</p>

    <p>In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.</p>

    <p>The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.</p>

    <p>The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”</p>

    <p><strong>Time is of the essence</strong></p>

    <p>The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to hone in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”</p>

    <p>The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.</p>

    <p>The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important role of grid-scale storage in the future.</p>

    <p>This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.</p>

    <p><em>This article appears in the&nbsp;</em><a href="https://energy.mit.edu/energy-futures/winter-2023/"><em>Winter 2023 issue of&nbsp;</em></a><a href="https://energy.mit.edu/energy-futures/winter-2023/">Energy Futures</a><em>,&nbsp;the magazine of the MIT Energy Initiative.</em></p>

  • A four-legged robotic system for playing soccer on various terrains

    <p>If you’ve ever played soccer with a robot, it’s a familiar feeling. Sun glistens down on your face as the smell of grass permeates the air. You look around. A four-legged robot is hustling toward you, dribbling with determination.&nbsp;</p>

    <p>While the bot doesn’t display a Lionel Messi-like level of ability, it’s an impressive in-the-wild dribbling system nonetheless. Researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robotic system that can dribble a soccer ball under the same conditions as humans. The bot used a mixture of onboard sensing and computing to traverse different natural terrains such as sand, gravel, mud, and snow, and adapt to their varied impact on the ball’s motion. Like every committed athlete, “DribbleBot” could get up and recover the ball after falling.&nbsp;</p>

    <p>Programming robots to play soccer has been an active research area for some time. However, the team wanted to automatically learn how to actuate the legs during dribbling, to enable the discovery of hard-to-script skills for responding to diverse terrains like snow, gravel, sand, grass, and pavement. Enter, simulation.&nbsp;</p>

    <p>A robot, ball, and terrain are inside the simulation&nbsp;— a digital twin of the natural world. You can load in the bot and other assets and set physics parameters, and then it handles the forward simulation of the dynamics from there. Four thousand versions of the robot are simulated in parallel in real time, enabling data collection 4,000 times faster than using just one robot. That’s a lot of data.&nbsp;</p>

    <p>The robot starts without knowing how to dribble the ball — it just receives a reward when it does, or negative reinforcement when it messes up. So, it’s essentially trying to figure out what sequence of forces it should apply with its legs. “One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” says MIT PhD student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”&nbsp;</p>
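
    <p>To make the reward-design step concrete, here is a minimal sketch of a velocity-tracking dribbling reward of the kind such a policy might be trained on. The terms, weights, and names are assumptions for illustration; the actual reward used for DribbleBot is defined in the team's paper.</p>

    ```python
    import numpy as np

    # Illustrative velocity-tracking dribbling reward. The terms and weights are
    # placeholders, not the reward actually used to train DribbleBot.
    def dribbling_reward(ball_vel, cmd_vel, joint_torques, fell, sigma=0.25,
                         w_track=1.0, w_energy=1e-4, w_fall=5.0):
        """Reward the policy for driving the ball at the commanded velocity."""
        tracking = np.exp(-np.sum((ball_vel - cmd_vel) ** 2) / sigma)  # near 1 when matched
        energy = np.sum(joint_torques ** 2)                            # discourage wasted effort
        return w_track * tracking - w_energy * energy - w_fall * float(fell)

    # Example: ball moving close to the commanded velocity, robot upright and relaxed.
    r = dribbling_reward(ball_vel=np.array([0.9, 0.0]), cmd_vel=np.array([1.0, 0.0]),
                         joint_torques=np.zeros(12), fell=False)
    print(round(r, 3))
    ```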

    <p>The bot could also navigate unfamiliar terrains and recover from falls due to a recovery controller the team built into its system. This controller lets the robot get back up after a fall and switch back to its dribbling controller to continue pursuing the ball, helping it handle out-of-distribution disruptions and terrains.&nbsp;</p>
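
    <p>A minimal sketch of that kind of supervisor logic, assuming a simple body-orientation threshold decides when to hand control to the recovery policy, might look like this. The threshold, observation fields, and policy interfaces are hypothetical rather than the team's implementation.</p>

    ```python
    # Hypothetical supervisor that swaps between a dribbling controller and a recovery
    # controller. The threshold and interfaces are assumptions for illustration only.
    class ControllerSwitch:
        def __init__(self, dribble_policy, recovery_policy, max_tilt=1.0):
            self.dribble = dribble_policy
            self.recover = recovery_policy
            self.max_tilt = max_tilt  # radians of roll/pitch beyond which we assume a fall

        def act(self, obs):
            fallen = abs(obs["roll"]) > self.max_tilt or abs(obs["pitch"]) > self.max_tilt
            # Run the recovery policy until the body is upright again, then resume dribbling.
            return self.recover(obs) if fallen else self.dribble(obs)
    ```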

    <p>“If you look around today, most robots are wheeled. But imagine that there’s a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren’t flat, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, MIT professor, CSAIL principal investigator, and director of the Improbable AI Lab. “The whole point of studying legged robots is to go to terrains outside the reach of current robotic systems,” he adds. “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”&nbsp;</p>

    <p>The fascination with robot quadrupeds and soccer runs deep — Canadian professor Alan Mackworth first noted the idea in a paper entitled “On Seeing Robots,” presented at VI-92, 1992. Japanese researchers later organized a workshop on “Grand Challenges in Artificial Intelligence,” which led to discussions about using soccer to promote science and technology. The project was launched as the Robot J-League a year later, and global fervor quickly ensued. Shortly after that, “RoboCup” was born.&nbsp;</p>

    <p>Compared to walking alone, dribbling a soccer ball imposes more constraints on DribbleBot’s motion and what terrains it can traverse. The robot must adapt its locomotion to apply forces to the ball to&nbsp;dribble. The interaction between the ball and the landscape can differ from the interaction between the robot and the landscape on surfaces such as thick grass or pavement. For example, a soccer ball will experience a drag force on grass that is not present on pavement, and an incline will apply an acceleration force, changing the ball’s typical path. However, the bot’s ability to traverse different terrains is often less affected by these differences in dynamics — as long as it doesn’t slip — so the soccer test can be sensitive to variations in terrain that locomotion alone isn’t.&nbsp;</p>
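
    <p>A toy calculation makes the point: rolling resistance scales with a terrain-dependent coefficient, and an incline adds a gravity component along the slope, so the same kick decays differently on grass, pavement, and hills. The coefficients below are illustrative guesses, not measured values from the study.</p>

    ```python
    import math

    # Toy model of terrain-dependent ball dynamics. Positive speed means moving up the slope.
    # Rolling-resistance coefficients here are rough illustrative values only.
    g = 9.81  # m/s^2

    def ball_accel(speed, rolling_coeff, incline_deg):
        drag = rolling_coeff * g * math.cos(math.radians(incline_deg))  # rolling resistance
        slope = g * math.sin(math.radians(incline_deg))                 # gravity along the slope
        return -math.copysign(drag, speed) - slope

    print(ball_accel(speed=2.0, rolling_coeff=0.10, incline_deg=0))  # thick grass, flat
    print(ball_accel(speed=2.0, rolling_coeff=0.01, incline_deg=0))  # pavement, flat
    print(ball_accel(speed=2.0, rolling_coeff=0.01, incline_deg=5))  # pavement, slight uphill
    ```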

    <p>“Past approaches simplify the dribbling problem, making a modeling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously,” says Ji. “That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task, which combines aspects of locomotion and dexterous manipulation.”</p>

    <p>On the hardware side, the robot has a set of sensors that let it perceive the environment, allowing it to feel where it is, “understand” its position, and “see” some of its surroundings. It has a set of actuators that lets it apply forces and move itself and objects. In between the sensors and actuators sits the computer, or “brain,” tasked with converting sensor data into actions, which it will apply through the motors. When the robot is running on snow, it doesn’t see the snow but can feel it through its motor sensors. But soccer is a trickier feat than walking — so the team leveraged cameras on the robot’s head and body for a new sensory modality of vision, in addition to the new motor skill. And then&nbsp;— we dribble.&nbsp;</p>

    <p>“Our robot can go in the wild because it carries all its sensors, cameras, and compute on board. That required some innovations in terms of getting the whole controller to fit onto this onboard compute,” says Margolis. “That’s one area where learning helps because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This is in stark contrast with most robots today: Typically a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So, the whole thing is weighty, hard to move around.”</p>
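
    <p>For a sense of scale, a policy network small enough to run on embedded compute might look like the sketch below. The architecture, input size, and output size are assumptions for illustration, not the actual DribbleBot network.</p>

    ```python
    import torch
    import torch.nn as nn

    # A deliberately small policy network of the kind that fits on onboard compute.
    # The sizes and layers are illustrative assumptions, not DribbleBot's architecture.
    class TinyPolicy(nn.Module):
        def __init__(self, obs_dim=42, act_dim=12, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ELU(),
                nn.Linear(hidden, hidden), nn.ELU(),
                nn.Linear(hidden, act_dim),  # e.g., joint targets for 12 actuators
            )

        def forward(self, obs):
            return self.net(obs)

    policy = TinyPolicy()
    obs = torch.randn(1, 42)  # stand-in for proprioceptive and vision features
    print(policy(obs).shape)  # torch.Size([1, 12])
    print(sum(p.numel() for p in policy.parameters()), "parameters")
    ```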

    <p>There’s still a long way to go in making these robots as agile as their counterparts in nature, and some terrains were challenging for DribbleBot. Currently, the controller is not trained in simulated environments that include slopes or stairs. The robot isn’t perceiving the geometry of the terrain; it’s only estimating its material contact properties, like friction. If there’s a step up, for example, the robot will get stuck — it won’t be able to lift the ball over the step, an area the team wants to explore in the future. The researchers are also excited to apply lessons learned during development of DribbleBot to other tasks that involve combined locomotion and object manipulation, such as quickly transporting diverse objects from place to place using the legs or arms.</p>

    <p>“DribbleBot is an impressive demonstration of the feasibility of such a system in a complex problem space that requires dynamic whole-body control,” says Vikash Kumar, a research scientist at Facebook AI Research who was not involved in the work. “What’s impressive about DribbleBot is that all sensorimotor skills are synthesized in real time on a low-cost system using onboard computational resources. While it exhibits remarkable agility and coordination, it’s merely ‘kick-off’ for the next era. Game-On!”</p>

    <p>The research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator. A <a href="https://arxiv.org/abs/2304.01159" target="_blank">paper on the work</a> will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).</p>

  • A glimpse inside Intel

    <p>Intel CEO Pat Gelsinger gave an optimistic account of U.S. semiconductor manufacturing on Friday, telling an MIT audience that the ongoing expansion of his firm’s production capacity would bolster the company over the long term while giving the U.S. more economic and industrial security.</p>

    <p>“Everything digital runs on semiconductors,” Gelsinger said. “There is no digital without semiconductors today.”</p>

    <p>In 1990, he noted, 80 percent of the world’s semiconductors were built in the U.S. and Europe, whereas today, 80 percent are built in Asia. To spread manufacturing around more evenly, Intel is adding production in two huge fabrication plants, or “fabs,” one in Arizona and one still being built in Ohio.</p>

    <p>“We want balanced, resilient supply chains right across the world, and that’s what we’re out to accomplish with the CHIPS Act, and what Intel is driving [at] quite aggressively,” Gelsinger said, speaking before a capacity crowd in MIT’s Wong Auditorium. “Let’s build the fabs where we want them.”</p>

    <p>The relative lack of chip manufacturing capacity in the U.S., he added, “became acutely visible as we went through the Covid crisis.”</p>

    <p>The “CHIPS and Science” bill signed into law by President Biden last August provides $52 billion in federal funding for research, design, and manufacturing in the U.S. semiconductor industry, and bolsters the National Science Foundation in the process.</p>

    <p>“I am confident that we will invent the future,” Gelsinger said. “The question in my mind is: Will we manufacture the future?”</p>

    <p>Friday’s event was part of the Manufacturing@MIT Distinguished Speaker Series, which involves campus visits and talks by leaders throughout the manufacturing industries.</p>

    <p>Gelsinger was introduced by Anantha P. Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. In his remarks, Chandrakasan noted that Gelsinger “is committed to significantly expanding semiconductor manufacturing in the U.S.” and “has strongly supported and driven the CHIPS Act.”</p>

    <p>Throughout his comments, Gelsinger emphasized that making semiconductors effectively is an ongoing process, subject to continuous improvement and refinement. At Intel, he said, innovations are most meaningful when applied and used on an everyday basis.</p>

    <p>“This is one of the things the founders of Intel deeply believed,” said Gelsinger, who first joined the firm in 1979, stayed there for 30 years, and rejoined as CEO in 2021. “You can’t innovate and not manufacture. Those are inextricable in our industry. And if we’re going to be an innovator at the heart and soul of the digital future, we must be a manufacturer at scale.”</p>

    <p>Intel’s ability to scale up its manufacturing is increasing due to a five-year plan Gelsinger is implementing that bolsters the firm’s capital investments — “A $20 billion fab is an extraordinary statement,” he said — with the idea that greater capacity will pay off for the firm over time.</p>

    <p>“There’s a thin line, being a CEO, between being bold and crazy,” Gelsinger joked. “And right now Wall Street’s not sure which [side] of that line I’m on.”</p>

    <p>Still, he added, “One of my great days last year was the Ohio [fab] groundbreaking,” which President Biden and others attended. “You could feel the national pride welling up inside you.”</p>

    <p>Ultimately, he added, “What we’re doing with these projects is reshoring, rebalancing our manufacturing, leading with the core technology for the digital future, and doing it in the U.S. and Europe,” primarily. Intel does have 130,000 employees in 46 countries globally.</p>

    <p>Gelsinger was joined onstage by three MIT faculty members who engaged in dialogue with him: Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the MIT Computer Science and Artificial Intelligence Laboratory; Vladimir Bulović, director of MIT.nano and the Fariborz Maseeh Professor in Emerging Technology; and Jesús A. del Alamo, the Donner Professor of Science in the Department of Electrical Engineering and Computer Science.</p>

    <p>Asked by Rus about the trajectory of AI, Gelsinger sounded a bullish note, suggesting that vast new areas of research had been opened up in recent years — which also feeds a demand for more, and more powerful, chips.</p>

    <p>“It’s a thrilling time to be a computer scientist, but it’s even more thrilling to be a semiconductor manufacturing engineer,” Gelsinger quipped.</p>

    <p>In dialogue with del Alamo, Gelsinger suggested the industry would still be able to keep improving the processing power of chips at a rapid rate. This general trend is often discussed in terms of “Moore’s Law,” named after Intel co-founder and former CEO Gordon Moore, who forecast that the number of transistors on a chip could keep doubling every two years.</p>

    <p>“I think we’ve declared the death of Moore’s Law for about three or four decades now,” Gelsinger said. However, he added, “We keep solving problems that allow us to keep rolling [with] about a decade in front of us,” referring to the length of time over which computing power will keep expanding significantly, according to a reasonable current forecast.</p>
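
    <p>The rule of thumb itself is simple arithmetic: doubling every two years means growth of roughly two to the power of half the number of years, so a decade of runway implies about a 32-fold increase in transistor counts.</p>

    ```python
    # Moore's Law as stated: transistor counts double roughly every two years,
    # i.e., growth of 2 ** (years / 2). A decade at that pace is about a 32x increase.
    for years in (2, 10, 20):
        print(years, "years ->", 2 ** (years / 2), "x")
    ```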

    <p>Gelsinger also emphasized the opportunities available at Intel for workers across a wide range of backgrounds in science and engineering. While discussing with Bulović the interaction between academic research and large-scale chip manufacturing, Gelsinger noted that MIT has “incredible students, incredible minds, and I would hope that every one of them gets into the Nano lab and falls in love again with building hardware, building silicon at scale.”</p>

    <p>The event was sponsored by the Department of Mechanical Engineering, the Department of Political Science, the Industrial Performance Center, MIT.nano, Machine Intelligence for Manufacturing and Operations,&nbsp;Leaders for Global Operations, the Laboratory for Manufacturing and Productivity, and Mission Innovation X.</p>

  • Fiber “barcodes” can make clothing labels that last

    <p>In the United States, an estimated <a href="https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/textiles-material-specific-data" target="_blank">15 million tons</a> of textiles end up in landfills or are burned every year. This waste, amounting to 85 percent of the textiles produced in a year, is a growing environmental problem. In 2022, Massachusetts became the first state to enact a law banning the disposal of textiles in the trash, aiming to increase recycling rates.</p>

    <p>But recycling textiles isn’t always easy. Those that can’t be resold as-is are sent to facilities to be sorted by fabric type. Sorting by hand is labor intensive, made harder by worn-out or missing labels. More advanced techniques that analyze a fabric’s chemistry often aren’t precise enough to identify materials in fabric blends, which make up most clothing.</p>

    <p>To improve this sorting process, a team from MIT Lincoln Laboratory and the University of Michigan offers a new way to label fabrics: by weaving fibers with engineered reflectivity into them. These fibers reflect light only at specific infrared wavelengths. Depending on the wavelengths of light that a fiber reflects when scanned, recyclers would know which type of fabric it represents. In essence, the fiber works like an optical barcode to identify a product.</p>

    <p>“Having a way to easily identify fabric types and sort them as they’re coming through could help make recycling processes scale up. We want to find ways to identify materials for another use after the life cycle of the garment,” says Erin Doran, a co-author of <a href="https://onlinelibrary.wiley.com/doi/10.1002/admt.202201099" target="_blank">the team’s study</a>, which was recently published in <em>Advanced Materials Technologies</em>.</p>

    <p><strong>Pulling threads</strong></p>

    <p>Doran is a textile specialist at the <a href="https://www.ll.mit.edu/about/facilities/defense-fabric-discovery-center" target="_blank">Defense Fabric Discovery Center</a> (DFDC) at Lincoln Laboratory. There, she works with researchers in the <a href="https://www.ll.mit.edu/r-d/advanced-technology/advanced-materials-and-microsystems" target="_blank">Advanced Materials and Microsystems Group</a> to make “fabrics of the future” by integrating fibers ingrained with tiny electronics and sensors.</p>

    <p>At the University of Michigan, <a href="https://mse.engin.umich.edu/people/mshtein/group/brian-iezzi" target="_blank">Brian Iezzi</a>, the study’s lead author, was investigating ways to improve textile recyclability. His work in U-Michigan’s Shtein Lab focuses on applying photonics to fiber-based devices. One such device is called a structural-color fiber, a type of photonic fiber first <a href="https://news.mit.edu/2002/mirrorfabric" target="_blank">developed at MIT</a> more than 20 years ago by Professor Yoel Fink’s research team. It’s one area of expertise today at the DFDC.</p>

    <p>“It’s a fiber that acts like a perfect mirror,” says DFDC researcher Bradford Perkins, a co-author of the study. “By layering certain materials, you can design this mirror to reflect specific wavelengths. In this case, you’d want reflections at wavelengths that stand out from the optical signatures of the other materials in your fabric, which tend to be dark because common fabric materials absorb infrared radiation.”</p>

    <p>The fiber starts out as a block of polymer called a preform. The team carefully constructed the preform to contain more than 50 alternating layers of acrylic and polycarbonate. The preform is then heated and pulled like taffy from the top of a tower. Each layer ends up less than a micron thick, and together the layers produce a fiber the same size as a conventional yarn in fabric.</p>

    <p>While each individual layer is clear, the pairing of the two materials reflects and absorbs light to create an optical effect that can look like color. It’s the same effect that gives butterfly wings their rich, shimmering colors.</p>

    <p>“Butterfly wings are one example of structural color in nature,” says co-author Tairan Wang, also from Lincoln Laboratory. “When you look at them very closely, they’re really a sheath of material with nanostructured patterns that scatter light, similar to what we’re doing with the fibers.”</p>

    <p>By controlling the speed at which the fibers are drawn, researchers can “tune” them to reflect and absorb specific, periodic ranges of wavelengths — creating a unique optical barcode in each fiber. This barcode can then be assigned to corresponding fabric types, one symbolizing cotton, for example, and another polyester. The fibers would be woven into fabrics when the fabrics are manufactured, before being put to use in a garment and eventually recycled.</p>
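
    <p>The underlying optics follow the first-order Bragg condition for a periodic two-material stack at normal incidence: the reflection peak sits near twice the optical thickness of one layer pair, so changing layer thicknesses through the draw speed shifts the peak. The sketch below uses approximate published refractive indices for acrylic and polycarbonate; the layer thicknesses and the signature-to-fabric lookup are hypothetical.</p>

    ```python
    # First-order Bragg condition for a periodic two-material stack at normal incidence:
    # the reflection peak sits near lambda = 2 * (n1*d1 + n2*d2). Indices are approximate
    # literature values; layer thicknesses and the fabric lookup are made-up examples.
    n_acrylic, n_polycarb = 1.49, 1.58

    def reflection_peak_nm(d_acrylic_nm, d_polycarb_nm):
        return 2 * (n_acrylic * d_acrylic_nm + n_polycarb * d_polycarb_nm)

    # Fibers drawn to different layer thicknesses give distinct infrared signatures.
    signatures = {
        round(reflection_peak_nm(250, 250)): "cotton",     # peak near 1535 nm
        round(reflection_peak_nm(300, 300)): "polyester",  # peak near 1842 nm
    }
    print(signatures)
    ```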

    <p>Unlike the eye-catching designs of butterfly wings, the fibers are not meant to be showy. “They would make up less than a few percent of the fabric. Nobody would be able to tell that they’re there until they had an infrared detector,” Perkins says.</p>

    <p>A detector could be adapted from the kind used to sort plastics in the recycling industry, the researchers say. Those detectors similarly use infrared sensing to identify the unique optical signatures of different polymers.</p>

    <p><strong>Trying it on in the future</strong></p>

    <p>Today, the team has applied for patent protection on their technology, and Iezzi is evaluating ways to move toward commercialization. The fibers produced in this study are still slightly thick relative to clothing fibers, so thinning them further while retaining their reflectivity at the desired wavelengths is an ongoing area of research.</p>

    <p>Another avenue to explore is making the fibers more akin to sewing thread. This way, they could be sewn into a garment in cases when weaving them into a certain fabric type could affect its look or feel.</p>

    <p>The researchers are also thinking about how structural-color fibers could help tackle other environmental problems in the textile industry, like toxic waste from dyes. One could imagine using such fibers to make fabrics that are inherently imbued with color that never fades.</p>

    <p>“It’s important for us to consider recyclability as the electronic-textile market expands, too. This idea can open avenues for recovering chips and metals during the textile recycling process,” Doran says. “Sustainability is a big part of the future, and it’s been exciting to collaborate on this vision.”</p>

  • 3 Questions: How automation and good jobs can co-exist

    <p><em>In 2018, MIT convened its Task Force on the Work of the Future, which concluded in a <a href="https://news.mit.edu/2020/work-of-future-final-report-1117" target="_blank">2020 report</a> that while new technologies were not necessarily going to massively wipe out employment, smart practices and policies would be necessary to let automation complement good jobs. Today a successor group is continuing the task force’s effort: The Work of the Future Initiative, whose co-directors are Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics at MIT, and Ben Armstrong, executive director and research scientist at MIT’s Industrial Performance Center. </em></p>

    <p><em>The Work of the Future Initiative is conducting research onsite at manufacturing firms and generating collaborative work on campus. Meanwhile, in a recent </em>Harvard Business Review<em> article, Shah and Armstrong outlined their vision of “positive-sum automation” in manufacturing, in which robots and automation co-exist with worker-driven input rather than wiping out workers. They spoke with </em>MIT News<em> about their ideas.</em></p>

    <p><strong>Q:</strong> Let’s start with your perspective about how technologies and workers can complement each other. What is “positive-sum automation,” this core idea of the Work of the Future Initiative?</p>

    <p><strong>Ben Armstrong:</strong> One thing Julie and I both noticed from visiting factories and studying manufacturers, and that Julie noticed from her work developing robotics technologies, is the tradeoff between productivity advances, which is often the goal of automation, and flexibility. When firms become more productive in repetitive processes, they often lose flexibility. It becomes costlier to change production processes, or make adjustments for workers, even on the level of ergonomics. In short, “zero-sum automation” is a tradeoff, while “positive-sum automation” is using different technology design and strategy to get both productivity and flexibility.</p>

    <p>This isn’t just important for firm performance, but for workers. A lot of firms adopting robots actually hire more workers. It’s an open question whether those jobs become better. So, by promoting flexibility as part of the automation process, that can be better for workers, including more worker input.</p>

    <p><strong>Julie Shah:</strong> I develop AI-enabled robots and have worked for much of my career in manufacturing, trying to cut against this paradigm where you make a choice between either a human doing the job or a robot doing the job, which is by definition zero-sum. It requires a very intentional effort in shaping the technology to make flexible systems that improve productivity.</p>

    <p><strong>Q:</strong> How often do firms not realize that automation can lead to this kind of tradeoff?</p>

    <p><strong>Shah:</strong> The mistake is nearly ubiquitous. But as we toured firms for our research, we saw that the ones that are successful at adopting and scaling the use of robots have a very different mindset. The traditional way you think of labor displacement is, if I put this robot in, I take this person out. We were just in a factory where a worker is overseeing multiple robots, and he said, “Because my job got easier, I can now timeshare between multiple machines, and instead of being crazy busy, I can spend 20 percent of my time thinking about how to improve all of this.” The learning curve in the factory is driven by people and their ability to innovate.</p>

    <p><strong>Armstrong:</strong> It’s sometimes hard to measure the impact of a technology before it’s deployed. You don’t really know what hidden costs or benefits might emerge. Workers spending time more creatively on problems becomes a downstream benefit. In health care, for instance, automating administrative tasks might meet resistance, but in our interviews, workers talked about how they could now focus on the most interesting parts of their jobs, so we see an outcome that’s good for workers and also potentially good for continuous improvement at these firms.</p>

    <p>The focus of the [<em>Harvard Business Review</em>] piece was hardware technologies, but firms can be very creative in how they connect their front-office software used to sell their product with the software that controls their machines. Another piece I’ve been interested in is logistics and warehousing, which in some ways has seen far greater advances in robotics and automation, and where there’s a lot of potential to improve job quality for people.</p>

    <p><strong>Q:</strong> In its current incarnation, what does the Work of the Future Initiative consist of?</p>

    <p><strong>Shah:</strong> The Work of the Future Initiative has what we call an “automation clinic,” where we bring researchers and students out to firms in manufacturing, to look at how companies might break out of their zero-sum choices and to showcase those success stories. But the initiative is broader than that. There are seed research efforts and other ways we engage faculty and research across the Institute.</p>

    <p><strong>Armstrong:</strong> We’re developing an open library of case studies, and we’re always looking for new places to visit and new industry partners to learn from. And we are looking for more structured opportunities for campus discussions. The Work of the Future Initiative is not a closed community, and we would very much like to reach out to people at MIT. It’s exciting and challenging to have people who run a robotics lab working with social scientists. It happens at MIT but might not happen at other places. We’re trying to spur more collaborations among people who look at the same questions in different ways.</p>

    <p><strong>Shah:</strong> When the Work of the Future task force started in 2018, there were billboards on I-90 telling people they’d better retire now [due to robots]. But what’s happening is much more nuanced. There are all these different possible futures as you deploy these technologies. It’s a large and long-term research agenda to ask about the organizational decisions that produce positive outcomes for firms and workers. That’s very motivating, I think, for people doing the engineering work, and involves broad engagement, and that’s what we’re aiming for.</p>