Nvidia, Notre Dame push AI transformers to the edge


Chipmakers like Nvidia and researchers from Notre Dame want to make huge transformers like large natural-language-processing models speedier, more nimble and more energy efficient.

"We want it smaller and smaller, and it has to be more energy efficient.”

Transformer networks, colloquially known to deep-learning practitioners and computer engineers as “transformers,” are all the rage in AI. Over the last few years, these models — known for their massive size, huge volumes of training data and enormous parameter counts, and, by extension, their high carbon footprint and cost — have gained favor over other types of neural network architectures.

Some transformers, particularly large natural-language-processing models, even have names that are recognizable to people outside AI, such as GPT-3 and BERT. They’re used across audio-, video- and computer-vision-related tasks, drug discovery and more.

Now chipmakers and researchers want to make them speedier and more nimble.

“It’s interesting how fast technology for neural networks changes. Four years ago, everybody was using these recurrent neural networks for these language models and then the attention paper was introduced, and all of a sudden, everybody is using transformers,” said Bill Dally, chief scientist at Nvidia, during an AI conference held last week by Stanford’s HAI. Dally was referring to “Attention Is All You Need,” an influential 2017 Google research paper that introduced the architecture forming the backbone of transformer networks, which relies on “attention mechanisms,” or “self-attention,” a new way to process the data inputs and outputs of models.
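
For readers outside the field, the self-attention Dally mentions fits in a few lines of code. Here is a minimal NumPy sketch of scaled dot-product attention (single-head and unbatched; illustrative only, not Nvidia’s or Google’s implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X: (seq_len, d_model) token embeddings.
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices.
    Every output row is a weighted mix of all value vectors, so each
    token can "attend" to every other token in a single step.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # pairwise similarity
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights = weights / weights.sum(-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))                   # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (4, 8)
```

Because every token attends to every other token in one step, the sequence no longer has to be processed one position at a time, which is part of why transformers displaced recurrent networks.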

“The world pivoted in a matter of a few months and everything changed,” Dally said. To meet the growing interest in transformers, the AI chip giant in March introduced its Hopper H100 GPU, whose Transformer Engine is designed to streamline transformer model workloads.

But some researchers are pushing for even more. There’s talk not only of making compute- and energy-hungry transformers more efficient, but of eventually upgrading their design so they can process fresh data in edge devices without having to make the round trip to process the data in the cloud.

A group of researchers from Notre Dame and China’s Zhejiang University presented a way to reduce memory-processing bottlenecks and cut computational and energy requirements in an April paper. The “iMTransformer” approach is a transformer accelerator that decreases memory-transfer needs by computing in memory, and reduces the number of operations required by caching reusable model parameters.
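
The caching idea translates naturally to software, even though the iMTransformer realizes it in memory hardware. Below is a toy sketch of a decoder-style attention step that reuses cached key/value projections (hypothetical class and variable names; this is not the paper’s implementation):

```python
import numpy as np

class CachedAttentionStep:
    """Toy attention step with a key/value cache.

    Once a token's key and value projections are computed, they are
    reused for every later step instead of being recomputed. This is
    the kind of reuse an in-memory accelerator exploits to cut data
    movement.
    """

    def __init__(self, d_model):
        rng = np.random.default_rng(0)
        self.Wq = rng.standard_normal((d_model, d_model))
        self.Wk = rng.standard_normal((d_model, d_model))
        self.Wv = rng.standard_normal((d_model, d_model))
        self.keys, self.values = [], []   # the cache

    def step(self, x):
        # Project only the newest token; cached rows are never redone.
        self.keys.append(x @ self.Wk)
        self.values.append(x @ self.Wv)
        q = x @ self.Wq
        K, V = np.stack(self.keys), np.stack(self.values)
        scores = K @ q / np.sqrt(len(q))
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

attn = CachedAttentionStep(d_model=8)
for token in np.random.default_rng(1).standard_normal((5, 8)):
    out = attn.step(token)        # one new projection per step, not five
print(out.shape)                  # (8,)
```

In software this saves recomputation; in an in-memory accelerator the same reuse keeps parameters resident in the memory arrays instead of shuttling them across a bus, which is where much of the claimed energy saving comes from.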

Right now the trend is to bulk up transformers so the models get large enough to take on increasingly complex tasks, said Ana Franchesca Laguna, a computer science and engineering PhD at Notre Dame. When it comes to large natural-language-processing models, she said, “It’s the difference between a sentence or a paragraph and a book.” But, she added, “The bigger the transformers are, your energy footprint also increases.”

Using an accelerator like the iMTransformer could help to pare down that footprint, and, in the future, create transformer models that could ingest, process and learn from new data in edge devices. “Having the model closer to you would be really helpful. You could have it in your phone, for example, so it would be more accessible for edge devices,” she said.

That means IoT devices such as Amazon’s Alexa-powered speakers, Google Home devices or factory equipment maintenance sensors could process voice or other data on the device rather than having to send it to the cloud, which takes more time and compute power and could expose the data to possible privacy breaches, Laguna said.

IBM also introduced an AI accelerator called RAPID last year. “Scaling the performance of AI accelerators across generations is pivotal to their success in commercial deployments,” wrote the company’s researchers in a paper. “The intrinsic error-resilient nature of AI workloads present a unique opportunity for performance/energy improvement through precision scaling.”
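
“Precision scaling” generally means running parts of a workload at reduced numeric precision. A rough Python illustration of the simplest form, symmetric int8 quantization (a generic sketch of the idea, not IBM’s method):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float32 tensor to int8.

    Lower precision means less memory traffic and cheaper arithmetic,
    at the cost of a small, usually tolerable rounding error.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
print("max rounding error:", np.abs(w - dequantize(q, scale)).max())
```

Eight-bit integers move a quarter of the data of 32-bit floats, and AI models usually tolerate the rounding error, which is the “error-resilient nature” the researchers cite.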

Farah Papaioannou, co-founder and president at Edgeworx, said she thinks of the edge as anything outside the cloud. “What we’re seeing of our customers, they’re deploying these AI models you want to train and update on a regular basis, so having the ability to manage that capability and update that on a much faster basis [is definitely important],” she said during a 2020 Protocol event about computing at the edge.

Laguna uses a work-from-home analogy when thinking of the benefits of processing data for AI models at the edge.

“[Instead of] commuting from your home to the office, you actually work from home. It’s all in the same place, so it saves a lot of energy,” she said. She said she hopes research like hers will enable people to build and use transformers in a more cost- and energy-efficient way. “We want it on our edge devices. We want it smaller and smaller, and it has to be more energy efficient.”

Laguna and the other researchers she worked with tested their accelerator approach using smaller chips, and then extrapolated their findings to estimate how the process would work at a larger scale. However, Laguna said that turning the small-scale project into a reality at a larger scale will require customized, larger chips.

Ultimately, she hopes it spurs investment. A goal of the project, she said, “is to convince people that this is worthy of investing in so we can create chips so we can create these types of networks.”

That investor interest might just be there. AI is spurring increased investment in chips for specific use cases. According to data from PitchBook, global sales of AI chips rose 60% last year compared to 2020, reaching $35.9 billion. Around half of that total came from specialized AI chips in mobile phones.

Systems designed to operate at the edge with less memory rather than in the cloud could facilitate AI-based applications that can respond to new information in real time, said Jarno Kartela, global head of AI Advisory at consultancy Thoughtworks.

“What if you can build systems that by themselves learn in real time and learn by interaction?” he said. “Those systems, you don’t need to run them on cloud environments only with massive infrastructure — you can run them virtually anywhere.”


Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.

Increasingly widespread EV adoption is starting to displace the use of oil, but there's still a lot of work to do.

More electric mopeds on the road could be an oil demand game-changer.


Electric vehicles are starting to make a serious dent in oil use.

Last year, EVs displaced roughly 1.5 million barrels of oil per day, according to a new analysis from BloombergNEF. That is more than double the amount EVs displaced in 2015. The majority of the displacement is coming from an unlikely source.

Two- and three-wheelers — not e-bikes, but mopeds, scooters and motorcycles — have dominated the avoided oil use. Those EV options are especially prevalent in Asia, where they have been adopted rapidly. In 2021, the oil displaced by these smaller vehicles alone hit 1 million barrels per day. Meanwhile, the amounts displaced by buses and passenger vehicles remain much smaller, but have increased steadily over the last six years.

This is undoubtedly good climate news. But it comes with a caveat: The amount of oil use displaced represents only a small share (3.3%) of the total global demand for transportation fuel, which was roughly 43.7 million barrels of oil per day in 2021. Oil demand has to fall 37% by the end of this decade to keep the 1.5-degree-Celsius target in reach, according to the Intergovernmental Panel on Climate Change’s 2018 report. Widespread EV adoption will play a huge role in determining whether doing so is possible.

This data comes as a part of BNEF’s work for the Zero Emission Vehicle Transition Council, an international forum to accelerate the transition to electric and other zero-emissions vehicles. The latest update found that global passenger EV sales grew by 103% in 2021, to nearly 6.6 million cars sold. EVs, including plug-in hybrids, accounted for 13% of total passenger vehicle sales in the fourth quarter of 2021.

According to BNEF, the displaced oil demand is nearly equivalent to one-fifth of Russia’s total exports before its invasion of Ukraine. The European Commission recently proposed banning all Russian oil and petroleum imports, as Europe tries to both punish the Kremlin and speed its own transition to renewables.

As the new data shows, creating policies that get more EVs of all shapes and sizes on the road — and the infrastructure to charge them — will also reduce fossil fuel dependence.

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter ( @l_m_j_) or reach out via email (ljenkins@protocol.com).

Imagine: You’re the leader of a real estate team at a restaurant brand looking to open a new location in Manhattan. You have two options you’re evaluating: one site in SoHo, and another site in the Flatiron neighborhood. Which do you choose?

Companies that need to make these types of decisions leverage foot traffic patterns coupled with additional data sources to build a sound approach to real estate and investment decisions. Below, we take a closer look at points of interest and foot traffic patterns to demonstrate how location data can be leveraged to inform better site selection strategies.

Here’s how foot traffic data can impact site selection or real-estate decisions.

Look at your competitive set: Identify current venues in a neighborhood or area to determine where there might be white space and to quantify the competitive landscape. Analyze your overall competitive set (e.g., in this report we looked at all restaurants) as well as more specific, relevant categories of venues (e.g., in this report we looked at cafes). Know which places your prospective customers go now, and where you might have an opportunity to take market share or position yourself alongside businesses that provide synergies.

Know whether your consumer traffic would come from tourists or locals: Classify visitors as tourists or locals based on whether their home ZIP codes are more than 120 miles away to better understand the catchment area (i.e., where consumers are coming from); a code sketch of this classification appears after this list.

Know more about consumers in your neighborhood: Analyze the demographics of consumers in a particular neighborhood to understand the types of people a prospective site might draw so that you can select the optimal location based on your target audience.

Uncover changes in visit patterns over time, and within a typical week: Look at a particular neighborhood over time in order to capitalize on trends, selecting a site where traffic may be on the rise. Compare visitation patterns by neighborhood to understand the traffic you might expect to see throughout the week at a given site, informing and validating (or invalidating) your projections. Know which day of the week sees the most natural foot traffic.

Understand the trends and what your consumers like: It’s critical to know what consumers are looking for, how they spend their time and what they like now and into the future.

Use data-visualization platforms and tools to make insights easy: Data-visualization platforms make complex information and insights easier to understand and ultimately act on. Companies that adopt data visualization can spot emerging trends sooner and react faster.
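
As an example of the tourist-versus-local classification described above, here is a minimal pandas sketch (the column names and figures are hypothetical):

```python
import pandas as pd

# Hypothetical visit-level data: one row per observed visitor, with the
# distance (miles) from their inferred home ZIP code to the venue.
visits = pd.DataFrame({
    "visitor_id": [1, 2, 3, 4, 5],
    "home_zip_distance_mi": [2.5, 300.0, 15.0, 121.0, 45.0],
})

# The 120-mile rule of thumb from the report: farther than that,
# treat the visitor as a tourist; otherwise, a local.
visits["segment"] = visits["home_zip_distance_mi"].apply(
    lambda d: "tourist" if d > 120 else "local"
)

print(visits["segment"].value_counts(normalize=True))  # share of each segment
```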

Different target audiences with different needs

SoHo: Consumers visiting restaurants in SoHo are primarily locals (83%) ages 25-34 (44%). Restaurants in this area attract super shoppers, affluent socialites, health-conscious consumers and a cultured and artsy crowd.

Flatiron: People visiting restaurants in Flatiron are primarily locals (86%) ages 25-34 (46%). Restaurants in this area attract health-conscious consumers, corporate professionals, college students and people who crave unique experiences.

Visitation patterns and staffing/hours of operation vary

SoHo: A restaurant in SoHo may struggle to draw consistent foot traffic throughout the earlier part of the day and week: Restaurants in SoHo rely heavily on weekend visits (38% of total weekly visits) in the late afternoons (60% of total daily visits occur after 3 p.m.).

Flatiron: A restaurant in Flatiron may struggle to draw consistent foot traffic throughout later day-parts and weekends: Restaurants in Flatiron rely heavily on weekday visits (70% of total weekly visits) in the earlier part of the day (45% of total daily visits occur before 3 p.m.).

SoHo: A new restaurant in NYC's SoHo neighborhood will face tough competition with more than 435 restaurants in the area, including over 48 cafes. Top-visited restaurants in this area include Gitano, Prince Street Pizza and Thai Diner.

Flatiron: A new restaurant might face less competition in Manhattan's Flatiron neighborhood, with roughly 267 restaurants in the area, including only 25 cafes. Top-visited restaurants in this area include Eataly, Shake Shack and The Smith.

While a new restaurant in NYC's Flatiron neighborhood may face less competition than one in SoHo, location data reveals what it takes to be successful in both neighborhoods.

To be successful in Flatiron, a restaurant will need to draw a weekday lunch crowd with healthy offerings and a work-friendly setting for professionals. To stand out in SoHo, which has nearly double the number of restaurants, a new restaurant should lean into arts and culture with a design-forward setting and focus on evening and weekend offerings.

Read the full report to better understand the role of location data in uncovering trends in consumer behavior, assessing the competitive landscape and unlocking unique opportunities for venue expansion.

AI and automated software that promises to make the web more accessible abounds, but people with disabilities and those who regularly test for digital accessibility problems say it can only go so far.



“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.

But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.

“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.

Whether struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, the everyday obstacles blocking people with disabilities from a satisfying digital experience are immense.

Needless to say, technology companies have tried to step in, often promising more than they deliver to users and businesses hoping that automated tools can break down barriers to accessibility. Although automated tech used to check website designs for accessibility flaws has been around for some time, companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can do it for large enterprises that need to manage thousands of website pages and app content.

Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.

For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.

“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.


Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”

One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held on Thursday, the annual worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.

Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.

It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.

Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that makes it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.

“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.
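
Two of the underlying checks are straightforward to sketch: flagging images with no alt text and computing the contrast ratio between two colors. A minimal Python illustration using BeautifulSoup and the WCAG 2.x luminance formula (not Evinced’s implementation):

```python
from bs4 import BeautifulSoup

def images_missing_alt(html):
    """Return <img> tags with no alt attribute at all.

    (An empty alt="" is valid for decorative images, so only a
    missing attribute is flagged here.)
    """
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img") if img.get("alt") is None]

def contrast_ratio(fg, bg):
    """WCAG 2.x contrast ratio between two sRGB colors (0-255 channels).

    WCAG AA requires at least 4.5:1 for normal-sized text.
    """
    def luminance(rgb):
        def linearize(c):
            c /= 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(images_missing_alt('<img src="logo.png"><img src="x.png" alt="chart">'))
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # 4.48, fails AA
```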

The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.

Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to peoples’ specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they could pose a risk of seizure. The company raised $28 million in venture capital funding last year. Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.

Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.

“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”

In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.

Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.

“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don't include good indications of non-linguistic elements of the media,” Long said.

Some companies that make accessibility overlay software take a blunt approach to selling it: The threat of lawsuits. “There was a time when it was possible to get away with having a website that didn’t work for people with disabilities. But those days are long gone, and ADA regulations for web accessibility are being enforced in court,” states UserWay on its website, noting that its “AI-powered” software is “the easiest way to avoid lawsuits.”

Much like in the digital privacy technology sector, the conversation around digital accessibility tech often emerges through a prism of legal threats and regulatory compliance, while incentives such as helpfulness and inclusion get less attention.

AccessiBe’s homepage touts its tech as “The #1 Automated Web Accessibility Solution” for regulatory compliance. Evinced, on the other hand, seems to play down the compliance-related selling points. Its homepage slogan? “Doing the right thing just became a lot easier.”

“It makes good financial sense to do this outside of a compliance risk perspective. Disabled customers are incredibly loyal,” said Aucoin.

However, concerns about legal risks are warranted. There were nearly 27% more lawsuits related to digital accessibility filed in the last quarter of 2021 compared to the last quarter of 2020, according to Accessibility.com, a website providing digital accessibility information that tracks accessibility lawsuits in state and federal jurisdictions throughout the U.S. and is funded through sponsorships from digital accessibility businesses. According to the site’s 2021 report, 2,352 web accessibility lawsuits were filed in 2021, up from 2,058 cases filed nationwide in 2020.

While legal threats mount, the pandemic has spurred additional pressure from federal regulators. The Department of Justice published guidance in March reminding businesses that web accessibility for people with disabilities is “a priority,” and that the agency is watching to see that companies ensure digital services are accessible to everyone, including digital public health services.

The DOJ highlighted recent settlements with grocery and pharmacy chains including Hy-Vee, Kroger, Meijer and Rite Aid, all of which the agency claimed violated the Americans with Disabilities Act by failing to make their COVID-19 vaccination registration portals accessible to some people with disabilities, including those who use screen reader software.

The Justice Department has emphasized its position that the ADA’s requirements apply to all services and activities of state and local governments — such as applying for an absentee election ballot or filing taxes — as well as online public accommodations such as ordering food or buying other goods or services.

The World Wide Web Consortium, the global body that establishes technical standards for the internet, has updated its Web Content Accessibility Guidelines over the years. Those standards “allowed for tech companies to say, ‘We can automate this — all you have to do is run scripts now based on those guidelines,’” said Lina Rivera, associate director of Quality Assurance at Rapp, a digital creative marketing agency. However, Rivera said, those guidelines should be just the starting point. “What is not being said here? What is not being covered? Having those conversations with the different communities impacted by these guidelines is [what’s needed],” Rivera said.

Rivera and the QA team at Rapp use automated tools to test marketing emails and websites the agency produces for advertisers, including tools that evaluate how emails will be read by screen readers.

But there are limits to the efficacy of simply automatically turning off animations or checking for image alt tags in webpage code, accessibility experts say. “Automation can only do so much,” Aucoin said. For example, he said although an automated tool might detect that code includes alt text language so screen readers can verbalize what’s in an image, “We can’t say if it is a good description.”

Rather than relying on its web development coders to write alt text, Rapp’s creative team has been involved over the past year in writing copy for use in descriptive image code in order to provide a better experience for visually impaired people, Rivera said. “We created an alt text guide,” she added.

Even champions of AI-based tech for accessibility recognize the need to involve people in the process.

“There’s always a role for manual testing,” said Penicook, whose group at Capital One conducts manual tests and monitors existing webpages and content. Assistance from tools like Evinced’s software allows his team to cover more bases, faster, he said.

“Testing for accessibility is not a straightforward thing at all. It has to be a combination of manual and automated,” said Rivera.

Pinedo said companies should be receptive to reviews people with disabilities give about their digital experiences, too, and be open to making relevant changes.

“There’s no way to anticipate people with multiple types of disabilities that may be using older types of [screen readers or computers],” she said. “The best thing you can do is to have an accessibility statement and be receptive to when something isn’t accessible to people in some way.”


Jeremy Allaire remains upbeat about stablecoins despite the UST wipeout, he told Protocol in an interview.



Circle CEO Jeremy Allaire said he saw the UST meltdown coming about six months ago, long before the stablecoin crash rocked the crypto world.

“This was a house of cards,” he told Protocol. “It was very clear that it was unsustainable and that there would be a very high risk of a death spiral.”

On Wednesday, the UST stablecoin, which is supposed to maintain a one-to-one peg to the U.S. dollar, had flatlined at roughly a penny, while its sister cryptocurrency, the luna, saw its total market value plunge to about $1 billion, down from more than $36 billion at its peak.

Allaire said what really caught him by surprise was “how fast the death spiral happened and how violent of a value destruction it was.”

In an interview with Protocol, Allaire talked about what the UST crash means for crypto, the future of stablecoins and his views of SEC Chair Gary Gensler’s leadership.

This interview has been edited for clarity and brevity.

What surprised you most in the UST and luna meltdown?

Our own internal analysis roughly six months ago was that this was a house of cards. It was very clear that it was unsustainable and that there would be a very high risk of a death spiral.

There were two things that surprised me. One was just simply how fast the death spiral happened and how violent of a value destruction it was. I was just speechless, just literally had never seen something evaporate that much in 72 hours.

The more surprising thing to me, what really came home to me when the total collapse occurred, was how many highly intelligent people, how many very savvy crypto leaders, investors, smart money, analysts, media, journalists, academics bought the hype. How many people went along with it, even though it was very clear in a basic, factual, data-driven manner that this was very high risk, and not sustainable.

It was just very clear and somehow there was a kind of collective hallucination. It's just surprising how many people were kind of wanting to believe the hype and meme this into existence.

That, to me, is one of the really most important things to reflect on here, which is that a lot of people have got to look in the mirror and ask themselves: What was it that led them to support such a thing and destroy so many people's lives?

There were reports of people who lost a lot of money and some even contemplated suicide. How did you react to those reports?

The fact that there's people who've had financial ruin is not a surprise at all. If you evaporate $50 billion or whatever it was in less than seven days, that's highly likely.

During that day, one of my colleagues was in a Reddit subreddit, watching people talk about how they had nothing left to live for and so on. Hearing that was really upsetting.

It just comes back to the point I was making, which is I do not think it's sincere or high integrity for leaders to just say, “Well, everyone knew the risks.” Because there were promoters and I think fairly intelligent people who indulged in this. Frankly, I'm very disappointed in a lot of people.

You’re a leader in the industry. Did you try to talk about this with Do Kwon or any of the folks from that ecosystem, to maybe warn them or gain insight into what they're trying to do?

Everything's out in the open, in public. People were certainly raising questions; I think the interesting thing was Do Kwon, who's a so-called leader, was just attacking people ad hominem. I think there's a culture of fear of not wanting to be attacked, publicly, or humiliated by Do Kwon. And that's shameful.

That's not leadership. That's cowardice and insecurity. So I think that's also something for people to reflect on. In podcast interviews, when I was asked about the topic, I would sort of outline what I think is problematic. But I'm one small voice. [Protocol reached out to Do Kwon through his Luna Foundation Guard nonprofit, but did not immediately get a response.]

You’re head of the second-biggest stablecoin, and stablecoins have taken a hit in terms of reputation. Are there things that you're trying to do now with other industry leaders in terms of making changes in the industry or proposals for regulations?

We're doing a lot of what we've always done, which is try and build the most trusted, most transparent, most compliant model possible for this. There's a reason why there's been a flight to quality. There's a reason why over the past week, USDC has seen strengthening, material strengthening. And you've seen other perceived higher-risk assets shedding enormous amounts of their assets.

I'm communicating a lot with other leaders, people who run important pieces of the ecosystem, to ask them what they're thinking. Has this changed how they think about risk? There is really, I think, a change taking place.

We've also spent a fair amount of time with policymakers who are trying to understand this and want to get things right, because stablecoin regulation has been on the table for a year.

The Treasury Department and the White House issued a report less than nine months ago, making it clear that stablecoins had risks of runs, that it was an urgent issue for Congress to deal with. And we agreed with that report.

After that, Congress started working on it. So we started working closely with Republicans, with Democrats in the House and the Senate. There's been a huge amount of work. So in the weeks even prior to this, we have really good draft legislation that's in front of committees. You've got bipartisan engagement on this.

If anything, we're doubling down on that. It's an opportunity for the United States to not only address some of the risks, but frankly, provide the kind of assurances that market participants need so they can build on this.

Ultimately, the promise of this is that it's a new dollar-market infrastructure that's built on the internet that households and firms and financial institutions can depend on and build on. All together, it just brought more light into how you do things right, versus how to do these kinds of crazy things.

An analyst told me that stablecoins should be boring, that stablecoin companies should be like the Federal Reserve. They should be making cryptocurrencies that [are] clearly pegged to the U.S. dollar, not offering tokens that offer attractive yields. That was a key point highlighted in the UST stablecoin controversy. People were stunned that users were promised 20% yield for depositing UST in Anchor Protocol.

Yeah, USDC is boring. USDC is a store of value, an electronic money instrument. One can merely create or redeem these through Circle. They can go through our partners like Coinbase or FTX, or others, and it doesn't do anything else. It's held in cash and short-term government bonds. It's extremely safe. It’s regulated. It’s audited. It's all these things, right. So it's boring. It does what it needs to do which is provide a reliable dollar digital currency that runs on the internet.

But you also have products that offer interest.

That's separate. USDC itself, as a digital currency instrument, is boring, to use your phrase. There are borrowing markets for USDC. So those are interest rate markets. You can interact with those interest rate markets through decentralized protocols. You can interact with those interest rate markets through centralized services.

There's an interest rate people are paying to borrow and there's collateral against that as well.

We do offer that as an investment contract. It's only for accredited investors. It is offered as a security. We've designed this for sophisticated investors that understand [the risks]. There's a regulator that oversees it. There's a regulator that looks over the risk management, the operational controls, the custody, everything. So that is a regulated lending product.

That is radically different from something like Anchor Protocol where basically, it was just free money that was essentially paid for using luna tokens. I mean, it was a subsidy. So someone's basically saying, “Hey, give us your UST and we're going to subsidize you.” There's no one actually borrowing. On the other side of that, there were people pouring in stablecoins — no one actually wanted to borrow them. So they're just paying this yield and they're doing that out of the luna token. Terra is taking its own luna token and effectively using it to pay people a yield. That's like a Ponzi, like a subsidized free interest rate. There's no such thing. It's gonna burn out. That's where you can see the ticking time bomb.
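
The unsustainability Allaire describes is easy to put into rough numbers. A toy back-of-the-envelope model in Python, with hypothetical figures chosen only for illustration:

```python
# Toy model of a subsidized yield pool like the one Allaire describes:
# deposits are promised 20% APY, but almost nobody borrows, so the
# yield is paid out of a fixed subsidy reserve. All figures below are
# hypothetical, chosen only to show the shape of the problem.
deposits = 14e9                         # dollars deposited
promised_apy = 0.20                     # yield promised to depositors
reserve = 0.45e9                        # subsidy reserve in dollars
borrow_income = 0.02 * deposits * 0.20  # tiny real borrowing demand

annual_burn = deposits * promised_apy - borrow_income
months_left = reserve / annual_burn * 12
print(f"Reserve lasts roughly {months_left:.1f} months")  # ~2 months
```

With no real borrowing demand, the promised yield is pure outflow, so the reserve's lifetime is simply its size divided by the subsidy burn rate.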

What do you think will happen to the Terra ecosystem that backed UST and luna?

I mean, it completely just destroyed itself. I don't know. I don't even want to guess. I don't really care.

You mentioned that there's been more discussions about potential regulation. Has this crisis led you to any kind of new insight which made you think, “Okay, we shouldn't have done that,” or, “We shouldn't be doing this”?

I think one thing the regulatory discussions around stablecoin rules have focused on [is] defining what it is to be a dollar stablecoin issuer at a national level with an asset-backed stablecoin for running a payment system. That's been the focus, and I think that's been good.

What is now emerging is a recognition that what was being contemplated does not actually address the risk of these so-called algorithmic stablecoins. I think in some ways, what may be needed is statutory definitions. What is a dollar-backed stablecoin? How should it be treated under payments and banking law? What are the specific requirements for it, who's going to supervise it, right? At the same time, what are these other things?

Do you think this is the end of algorithmic stablecoins?

I don't think so. This is a category that I think for a variety of reasons kind of represents the holy grail. It’s like the fountain of youth or whatnot. There will be people who will continue to try and pursue this. It may be harder to pursue it now that it’s blown up on a global scale. But I don't think that we're done seeing efforts to produce these.

Why is it considered the holy grail?

Because for many people, cryptocurrency is a mechanism for storing value, moving value, that is decentralized. Bitcoin is clearly an example of a decentralized form of money that has global reach. [But] it’s difficult to denominate an everyday transaction in bitcoin. So the concept of having a stable-value digital currency, which is also decentralized, which is also not dependent on a centralized issuer, a government-regulated issuer, but that can hold $1 of value or whatever the reference asset is, is obvious.

People are talking about CPI indexed stablecoins, or things like that — that can hold the value but it's censorship-resistant and can exist without needing a centralized issuer or government regulatory intervention. That's what people are looking to accomplish.

Is that something Circle is studying or looking to move to eventually?

Not really. From a long-term perspective, I think the idea of synthetic global digital currencies is compelling, but that could be 20 years from now.

When we decided to build USDC five years ago, we really believed that the right place to start is with this hybrid digital currency model, where you take existing central bank liabilities, or government liabilities, like Treasury bonds or cash from the Fed or what have you, and tokenize it, create a digital currency form factor that can run on the public internet but that had the assurance and the interoperability with that existing financial system. That was the right place to start. That would be how mass society would adopt this first.

Last time we talked, you stressed the role of the private sector in the continued growth of blockchain, crypto and stablecoins. Now there's growing fear of an unregulated, unmonitored industry.

This is an amazing industry. I think it's one of the most dynamic industries in the world right now. I mean, if you just look at the sheer number of developers, creators, innovators, startups in Web3. It's staggering and it's growing at a really fast rate, and covering so many different applications and use cases and industries.

I'm more bullish than I've ever been on this space right now. I think the infrastructure is getting to a place where we can do internet-scale applications. The utility of this technology is expanding rapidly.

There is a very legitimate risk that regulators actually move too fast to try and define things because I actually think there's just still so much room for development and innovation right now.

SEC Chair Gary Gensler just testified before a House subcommittee, warning about heightened risks in the crypto market given what just happened. He’s been harshly criticized by the industry. What do you think of the way he has run the SEC?

I think Chair Gensler has certainly been consistent in what he says. There are obviously examples of people committing fraud or illegal conduct. They do continue to bring enforcement cases on those. And they're certainly bringing cases around areas where they feel the industry needs better definitions. For example, these yield products and lending products. They've brought some cases and they've sort of defined, “Here's what this is,” and that's caused this industry shift as a result.

I think there's other areas where there's a lot still up for debate. While he may have a very firm opinion about whether all tokens are securities or all crypto exchanges should be national stock exchanges or equivalent, there's a lot of disagreement about that right now. That's actively being debated and discussed in the House Financial Services Committee, in the House Agriculture Committee.

I don’t think one can just take for granted that because Chair Gensler believes something it is in fact what it is.

He’s been portrayed as an enemy of crypto.

I certainly don't think Chair Gensler is an enemy of crypto. I co-taught with him in a class at MIT. I've known him for some time. He's a very intelligent person. There's a lot of detail about this space and technology and so on and he can't necessarily himself understand all of it. So people are sort of saying, “Hey, you don't get this,” or, “You don't get that.” He's chair of the SEC. It's not his job to be the technical expert on everything.

You know, I feel like it's a two-way street. People have to lean in on both sides of things.

Benjamin Pimentel ( @benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

After weeks of “unprecedented growth,” Bobbie co-founder Laura Modi made a hard decision: to not accept any more new customers.



The ongoing baby formula shortage has taken a toll on parents throughout the U.S. Laura Modi, co-founder of formula startup Bobbie, said she’s been “wearing the hat of a mom way more than that of a CEO” in recent weeks.

“It's scary to be a parent right now, with the uncertainty of knowing you can’t find your formula,” Modi told Protocol.

Parents unable to track down formula in stores have been turning to Facebook groups, homemade formula recipes and Bobbie, a 4-year-old subscription baby formula company that sells organic, European-style formula modeled after human breast milk. The company has been “pretty massively” impacted by the shortage, said Modi, with sales and new customers spiking as demand increases. But Bobbie can’t meet that new demand amidst the shortage, and so Modi had to make a tough call: The company closed its subscription service to new customers and opened a waitlist. Current or former Bobbie customers can still purchase baby formula directly from the company or sign up for a subscription.

“I had to make a hard decision staring at supply versus demand and decide to prioritize our current customers over growing the business,” Modi said.

The baby formula shortage began in February when Abbott Nutrition, one of the largest suppliers of the product in the U.S., recalled several major brands of its powder formula after four babies suffered bacterial infections linked to products made in its Michigan factory. The company has since shut down that plant. The recall exacerbated ongoing pandemic-related supply chain issues in the baby formula market.

Abbott’s recall covers the leading powder formula products, including Similac, Alimentum and EleCare. Three other manufacturers, Mead Johnson, Gerber and Perrigo Nutritionals, are trying to keep up with the demand, but given the supply chain issues, there’s still a massive gap. According to Axios, 43% of baby formula is out of stock. For manufacturing, Bobbie partners with Perrigo Nutritionals, which is reportedly operating its facilities at 115% capacity and expects shortages to last through 2022, according to Reuters. Modi told Reuters that Perrigo is able to meet 100% of the company’s current needs.

“This [shortage] is a big wake-up call that we cannot ever be dependent on any one manufacturer or anyone's supplier to make sure that we always have the product on hand,” Modi said.

Modi said Bobbie saw its customer count double in the first week following the recall. Weeks and weeks of “unprecedented growth” followed, she said. Limiting the company’s customer base was the best way to ensure reliability and a consistent supply of formula for those who have used Bobbie. The company’s more than 70,000 subscribers get bundles of four, eight or 10 cans of formula for between $114 and $285 per month.

“The decision was, rather than going out of supply, we’re going to give peace of mind to our current customers that if they started on Bobbie, they'll be able to continue on Bobbie,” Modi said.

The move to limit the number of customers who can buy Bobbie products is similar to policies of retail stores like Target, Kroger, Walgreens and CVS, which have placed limits on the amount that a person can buy in one transaction to prevent stockpiling.

Modi said the feedback to Bobbie’s decision has been overwhelmingly positive. After announcing the subscriber limit in an Instagram post last week, the company received an outpouring of support from parents who buy its products.

“I’m not even kidding when I say I cry happy tears every time that I open that can,” one customer said in an email to Bobbie shared with Protocol. “As long as I’m feeding a baby formula, I will always choose Bobbie. You have made a customer for life.”

The baby formula shortage is showing some early signs of easing: Abbott reached an agreement with the FDA on Monday that will reopen its Michigan factory. The company said production could restart within about two weeks, pending FDA approval, and that formula could be back on shelves six to eight weeks after that. And on Wednesday, the Biden administration invoked the Defense Production Act to boost baby formula production, as well as authorized flights to bring imports from overseas. According to the Associated Press, 98% of baby formula consumed in the U.S. is produced domestically. Bobbie didn’t say when it will reopen its sales, but plans to update its subscribers in June.

For now, Modi isn’t focused on growing Bobbie’s bottom line. Her goal is to give the company’s current customers a sense of relief knowing that Bobbie has them covered.

Correction: This story has been updated to correct the number of years Bobbie has been operating. This story was updated May 18, 2022.

Nat Rubio-Licht is a Los Angeles-based news writer at Protocol. They graduated from Syracuse University with a degree in newspaper and online journalism in May 2020. Prior to joining the team, they worked at the Los Angeles Business Journal as a technology and aerospace reporter.
