This is the published version of Forbes’ CEO newsletter, which offers the latest news for today’s and tomorrow’s business leaders and decision makers. Click here to get it delivered to your inbox every week.
On Friday, the Supreme Court ruled that judges, not federal agencies, have the final say on regulatory questions that are not explicitly addressed in federal law, essentially dismantling some of the regulatory power federal government agencies have held for decades and overturning a 40-year-old precedent.
The precedent, known as the Chevron Doctrine, has been the backbone of thousands of federal regulations put in place by the executive branch of the government through the years. It held that judges should defer to government agencies and their regulations when Congressional law is ambiguous. In the majority opinion, Chief Justice John Roberts wrote that the longstanding precedent “defies the command of” the Administrative Procedure Act—which governs how agencies operate—because the Act directs courts, not agencies, to interpret statutes.
In her dissent from the opinion handed down on Friday, Justice Elena Kagan decried the majority’s decision to take regulatory authority away from executive branch agencies, which are subject matter experts. “In one fell swoop, the majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law. As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar. It defends that move as one (suddenly) required by the (nearly 80-year-old) Administrative Procedure Act. But the Act makes no such demand. Today’s decision is not one Congress directed. It is entirely the majority’s choice.”
Regulation is a double-edged sword. Regulations can be expensive and difficult for businesses to keep up with, but they have also been credited with keeping people safe and ensuring business dealings are competitive and fair. Regulations can also be updated as time goes on and standards change without having to go through the much longer legislative process. For example, if a federal statute passed in 1970 codified a fine for noncompliance, that amount likely would not be much of a deterrent today. And if detailed laws about AI or cryptocurrency were put on the books now, they likely would not be relevant in a few years’ time.
How this decision will affect federal government regulations—and in turn businesses—is not yet known, but it will touch regulations that every business has abided by. The ruling is not immediately retroactive and, for now, applies only to the case that was argued.
Forbes senior contributor Howard Gleckman writes this could hamstring how the IRS looks at tax policies. Politico runs through some other areas where existing regulation and policy could be wiped out, including student debt relief, health care, AI policy, protections for transgender and pregnant individuals, net neutrality, pollution and forcing competition for lower food prices.
In its written brief for this case, the federal government wrote that invalidating the longstanding Chevron Doctrine could bring a “potentially destabilizing result.” As inflation continues to tick down and stock markets continue their upward climb, it appears the economy has been getting to a more stable place. Time will tell if that continues.
AI regulation in general is a hot topic that is being discussed around the world. I talked to Intertrust CTO Dave Maher, who is also a member of the U.S. AI Safety Institute Consortium under NIST, about the prospects for action to set AI policy in today’s Washington, D.C. An excerpt from our conversation is later in this newsletter.
ECONOMIC INDICATORS
Inflation is slowly decreasing, but it’s not yet at the point at which the Federal Reserve is likely to cut interest rates. The personal consumption expenditures index, the Fed’s favored inflation metric, came in at 2.6% in May, the lowest level since March 2021. The figure is moving down slowly—it was 2.7% in April—but remains well above the Fed’s long-term inflation target of 2%.
While there has been broad hope that interest rate cuts will happen in the coming months, Fed governor Michelle Bowman said last week that she doesn’t expect lower rates until “future years.” At a question-and-answer session at London think tank Policy Exchange, Bowman said, “I remain willing to raise” rates “should progress on inflation stall or even reverse.” Bowman is considered one of the more hawkish members of the board, but as progress on inflation reduction has slowed, predictions of rate cuts this year have also been slashed.
Sticky inflation is also keeping consumer confidence down. Figures released last week by the Conference Board show that elevated prices for food and groceries continue to weigh on how people see the economy, pushing the consumer confidence index down to 100.4 from 101.3 in May. The board’s expectations index—based on consumers’ short-term outlook for income, business and labor market conditions—also fell from May. That index has now spent five months below 80, a level that often signals a coming recession. (According to S&P Global figures, there’s a 25% to 30% chance of a recession in the next 12 months—higher than in normal times, but not imminent.)
NOTABLE EARNINGS
The last week has been both exhilarating and brutal for several publicly traded companies—both in the tech sector and outside of it—with all-time highs and near record lows throughout.
FedEx had one of the best days in company history last Wednesday. Its earnings report beat analysts’ projections on revenue, profit and full-year earnings guidance, but what pushed it over the top was news of a potential spinoff of FedEx’s less-than-truckload freight business. This segment of FedEx’s business is highly profitable and is likely to command a higher relative valuation on its own. JPMorgan analysts upgraded their recommendation on FedEx stock to “buy,” and many investors followed suit: by Friday, FedEx shares were up more than 18% from a week earlier.
Meanwhile, Nike shares plummeted more than 20% on Friday, giving the athletic apparel and equipment brand its worst day in 44 years on the stock market. Nike reported a 2% decline in quarterly sales, but also warned that a 10% year-over-year decline was expected in the coming fiscal year, with most of that coming from slumping demand in China. Nike is just one athletic brand that’s seeing revenue declines, writes Forbes senior reporter Derek Saul. Shares of Lululemon are down 17% over the last three years, while Adidas is down 26% and Under Armour is down 68%.
Tech had a much better week, as Nvidia climbed back into the $3 trillion club. The chip company held its annual shareholder meeting last Wednesday, and while there were no big announcements, CEO Jensen Huang said Nvidia is now reaping the benefits of investing heavily in AI a decade ago. Amazon also hit a new high last week, with its market cap exceeding $2 trillion for the first time ever last Wednesday. The impetus for Amazon’s rally may have been its retail division, since the dates—and some of the big deals—for its Prime Day retail bonanza next month were announced.
HUMAN CAPITAL
Several new state laws governing workplace policy will take effect on July 1. Forbes senior contributor Alonzo Martinez runs down some of the consequential ones. California employers will need to develop and implement a comprehensive workplace violence prevention plan. A new Colorado law will prohibit employers from seeking any information that might reveal an applicant’s age during the initial process. In New York City, bilingual posters informing workers of their rights will need to be hung in all workplaces, and workers will be given the city’s “Workers’ Bill of Rights” on their first day at a new job. South Dakota’s new law clarifies what employers can and cannot do about employees’ medical marijuana use. And Florida, Oregon and Texas have new data privacy laws that give consumers more of a say in what kinds of information businesses are allowed to collect.
TOMORROW’S TRENDS
Intertrust CTO Dave Maher On The Likelihood Of AI Regulation In Washington, D.C.
AI regulation has been a hot topic for governmental entities around the world. There’s been movement in Congress toward legislation to begin the process of setting guardrails to determine how AI should be used, including a hearing in April on a bipartisan Senate bill known as the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. I talked to Intertrust CTO Dave Maher, who is also a member of the U.S. AI Safety Institute Consortium under NIST, about the NO FAKES Act and AI regulatory issues in Washington, D.C. This conversation has been edited for length, clarity and continuity.
It’s always hard to predict what is going to happen in Congress, especially in an election year. What do you see as the appetite for something to happen with the NO FAKES Act or anything else that regulates AI technology?
Maher: I think there’s a few provocative things, like generating fake content that is really deceptive in the elections. They may be foreign governments who can use AI, and they’ve already figured out how to intrude on our elections by distributing content on social media and things like that in very subtle ways. Some of the fakes, so to speak, are such that they may not directly lie, but they may be a little bit more subtle. I think that, nonetheless, is going to be very provocative.
I think the other place that is going to come up on the horizon is when AI is used in conjunction with targeting people in things like marketing decision making, decisions regarding who’s insurable or what their insurance rate should be, where AI can really amplify the asymmetric advantage that larger corporations have over individuals. AI is just a big amplifier. I think when people figure out that they’re at a far greater disadvantage in contending with big corporations in the case of marketing, but also in contracts and things of that sort, mortgages, very subtle redlining. All of those kinds of things, AI can aid in what we think of as nefarious practices.
Those things are going to surface, and they’re going to be pretty daunting. Researchers worry about things like AI growing to have an advantage over humans and deciding that humans are superfluous, so they take over the world or control us. I don’t worry about that kind of stuff. I worry about the kind of stuff that we were just talking about.
In this hyper-partisan political climate, where does AI regulation fit in? Is it a partisan issue or a human one?
So far, I have seen it as a human issue and not partisan. Traditionally in the past, Republicans were anti-regulation and Democrats were for more regulation. But I have seen and heard a number of Republicans, even in this day and age, who worry about the power that AI can represent and yield to people. Whether or not that’s going to remain is another story. I’m cynical enough to believe that if very powerful people who have a lot of money start telling politicians, ‘You don’t want to regulate AI in that way,’ they’ll find a way of avoiding certain kinds of regulation.
It’s really too early to tell. In the case of risk evaluation in the NIST AI Consortium, we’re still sort of organizing, trying to figure out how we prioritize different kinds of risks. It’s almost all [language-related] content. Not so much in certain types of AI decision making that’s really important for things like distributed energy systems—where you’ve got different stakeholders, consumers versus distributors versus power generators, all who have different kinds of stakes in how AI would be used to optimize the way the systems operate. I don’t see anybody organizing yet to figure out how do you regulate that kind of thing. It’s kind of a multiplication of issues.
AI content generation is a big enough issue, and that’s what we’re focusing on [with] NIST. I think that wherever somebody can get an advantage to maintain power, you’ve got to at least expect that regulation will be opposed. I’m really looking at it from the point of view of how, and for whom, AI amplifies power.
What do you see the path to regulation looking like?
I really don’t see how that’s going to work very well. A lot of people have looked at what’s happened in Europe. The European approach starts kind of like the U.S. approach: It takes risk management as a starting point, but even the language that people use to describe it is a little bit more prescriptive. In the U.S., the risk management framework that NIST is using has the terms govern, map, measure and manage for risks—none of which are very heavy-handed. It’s kind of like, let’s see if we can be wise about all of this. In Europe, they start there but immediately go on to prohibiting certain types of AI practices that are deemed to be dangerous. Like [Europe’s General Data Protection Regulation law] was a little bit more heavy-handed. Europe may, in some ways, lead the way, and that may be a good thing.
Europe’s also more aggressive and requires a conformity assessment for AI systems. What it’s going to have to conform with still has to be determined, but they’re basically going to require a lot of these things, while in the U.S., people talk about best practices and things of that sort.
The kind of thing I’m looking for personally is a lot more transparency. We have to think of more human-centric approaches. When I’m encountering an AI agent of some sort that’s interacting with me, I want to know its nature. I want to know who’s responsible for it, what its pedigree is, how I can trust it, and what I can trust it for. I think that’s the kind of thing that we can reasonably get from an infrastructure for governance, which we’re going to need to put together, both in Europe as well as in the U.S.
What I think we’ll need to do is have a combination of legislation, enforcement and technology come together to figure out how to deal with more dangerous situations. I can’t see technology alone solving the problem, or prescriptive legislation alone. The technology is going to be imperfect, but what it can do is help eliminate abuses by making the abuses more clear.
FACTS + COMMENTS
After a year of successes, Elon Musk’s SpaceX is planning an insider share sale in a tender offer, Bloomberg first reported last week. The company also signed a deal with NASA to guide the International Space Station back to a non-populated area of Earth for its disposal after 2030.
$210 billion: Approximate valuation of the company at the share terms that were reported
$843 million: Amount NASA will pay SpaceX to deorbit the ISS
‘Supports NASA’s plans for future commercial destinations’: What Ken Bowersox, associate administrator for Space Operations Mission Directorate at NASA headquarters, said the contract with SpaceX does
STRATEGIES + ADVICE
There’s a lot of talk about which jobs generative AI could replace, and Forbes senior contributor Jodie Cook asked ChatGPT that question. Here’s what it said, and how those predictions square with reality.
Stress often goes hand-in-hand with leadership. Here’s how Elon Musk deals with it, according to posts on Quora from his ex-wife.
QUIZ
A judge ordered the NFL to pay $4.7 billion last week. Why?
A. It conspired with teams and DirecTV to make it more expensive to watch out-of-market games
B. To cover damages for former players diagnosed with CTE after playing in the league
C. It allowed companies other than Wilson to print the words “Official NFL Football” on their merchandise
D. It failed to provide sideline cheerleaders with protective equipment
See if you got the answer right here.