Responsible Lab Episode 2: Behavioral Economics and Industrial Organization
Industrial Organization and Responsible AI
Great to be back! I have been catching up on my traditional research responsibilities, as well as some new research and consulting platforms I am excited to share.
TRAIL Episode Two is live. Today, we will discuss the industrial organization of AI.
Since transparency and fairness are themes of this newsletter, I should note that I have presented my own research to Facebook multiple times, and have also been invited to give a talk at Google and also given independent feedback to various other research scientists in the industry and in academia.
Everything you read in The Responsible AI Lab newsletter is exclusively my own opinion and perspective and not that of any institution or organization named here.
Industrial organization is the field of economics dealing with the strategic behavior of firms within and across industries, antitrust policy, and related phenomena. It is expertise every organization should be familiar with, but it remains a niche specialization limited to economists like me. This simply has to change because the stakes are too high. Hopefully this episode will help.
Due to various alleged social impact infractions, Facebook, Google, Apple and Amazon are all undergoing rigorous antitrust investigations.
The future of AI as an industry hangs in the balance.
It is increasingly difficult for smaller companies to enter the industries and technological spaces of these giants.
Qualitatively speaking, there are justified concerns that the barriers to market entry are prohibitive. There is an ongoing dialogue as to whether harm is being done.
For example, startup stakeholders reading this will be very familiar with funders suddenly withdrawing from deals because of the fear of a major tech company mimicking an innovation and driving the startup out of business. The dreaded (and often fatal) venture capital (VC) question is:
What happens if [insert top tech company] enters your space?
Of course, it’s ironic that this comes up seemingly everywhere from California to Canada, because one would expect VCs to be more empathetic: the same tech companies also have VC arms which…
…haven’t seemed to drive traditional VCs out of the market at all. For example, Google Ventures was founded back in spring 2009 and, while formidable, is not a VC monopolist by any stretch of the imagination. More recently, Amazon and Facebook have created funds of their own. Not many VCs are shaking in their boots at this, although startups may have more reason to be concerned. Allegedly, some tech companies invest in VCs just to gather data on startups and, some speculate, on potential competitors.
What are the stakes? To what extent is this a real threat, and to what extent is it merely psychological?
Even companies that are not primarily tech firms worry that partnering with a tech giant to be on its platform poses existential risks if they become too reliant on that platform over time.
Especially when they have more data about you than you have about yourself.
Policy makers are similarly concerned given the increase in public-private partnerships in AI and tech.
One way to think about it is that ultimately, governments don’t want to make their own lives harder by partnering with a monopoly that will create more economic and social problems for them to solve in the future.
Based on my experience, however, I believe most tech and AI actors would prefer to do the right thing. I believe this is just as true for the vast majority of social sector organizations.
They usually just do not know how.
Are tech companies stifling competition by acquiring startups and potential competitors, and in so doing, reducing consumer and social welfare?
What can be done to minimize such outcomes? Does it have to be this way?
Economic and Social Impact
The main misconception many economic actors have is that economic and social impact are substitutes, so that having more of one means less of the other.
This view, however, reflects a limited understanding of modern economic and social science.
In reality, economic and social impact are supposed to be complements. Having more of one should translate into having more of the other.
There is no fundamental reason why internal and external factors must structurally orient an organization in a direction that is negative for society just for the sake of short-term profits.
However, this is not commonly known. The incentives powering these unsatisfactory outcomes are even less well understood, including by many economists at tech companies.
What are some practical challenges?
There are several practical challenges raised by the economics of such cases.
In the case of Apple, for example, the policy concern is the App Store, a marketplace for mobile apps. The House Antitrust Committee found that developers were not treated identically, although Apple has continued to insist that the App Store is a level playing field.
It turns out that larger companies got deals that enabled them to pay less in commission, to have faster app reviews, or to have personnel dedicated to their needs.
A similar argument holds for Amazon, whose marketplace often features its own products. Netflix Originals are in-house entertainment vehicles streamed on the same platform, much as Trader Joe's sells its own food products within its own stores.
However, this is not that different from Instagram carving out special deals for actors, musicians and other celebrities, or Netflix making special deals with established and blockbuster movie directors relative to deals cut with independent directors.
In and of itself, this is what we economists call price discrimination: a selling strategy that charges customers different prices for the same product or service based on what the seller believes each customer is willing to pay.
Price discrimination is not malignant in and of itself. As such, the fact that a company charges users different prices is not likely to be sufficient to warrant holding that firm accountable. These issues are widespread and difficult to avoid in a market context.
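To make the idea concrete, here is a minimal sketch in Python with purely hypothetical willingness-to-pay figures; none of these numbers come from any real case:

```python
# Hypothetical willingness to pay of three customers for the same service.
willingness_to_pay = {"large_firm": 100, "mid_firm": 60, "indie_dev": 30}

def uniform_revenue(price, wtp):
    """Revenue under a single posted price: only customers who value the
    service at least at `price` buy, and they all pay the same amount."""
    return sum(price for valuation in wtp.values() if valuation >= price)

# The best the seller can do with one price (here, pricing at 60 serves
# two customers for revenue of 120).
best_uniform = max(uniform_revenue(p, willingness_to_pay)
                   for p in willingness_to_pay.values())

# Perfect price discrimination: each customer is charged their own valuation.
discriminatory_revenue = sum(willingness_to_pay.values())

print(best_uniform, discriminatory_revenue)  # 120 190
```

The gap between the two numbers is why sellers differentiate terms whenever they can, and, as noted above, doing so is not malignant in itself.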
The real issue being investigated in these cases, however, is whether firms are deliberately suppressing competitors and competition.
It is one thing if a competitor is inadvertently harmed, and another when there is an agenda to harm them.
How do we know whether or not competitors are being systematically suppressed?
This question is not as obvious as it might seem. In the old days of industrial organization in economics, this question would have focused on whether consumers were being charged prices that were too high. This is because a monopoly itself is defined as a firm that corners a market and charges unnecessarily high prices.
You can see how this criterion has not aged well in the 21st century.
The price-based criterion goes out the window since many tech platforms are free to use, paid for instead with user data.
It’s not clear how to define anti-competitive behavior in this context. What is missing is a yardstick that cleanly identifies when malignant action is being taken.
A general rule building on the foundation of antitrust does not yet exist, and this is an area ripe for economics research.
This is a topic I am working on, as are many others.
The question is whether the difference in treatment rises to a level where the market is fundamentally altered from a level playing field.
At a minimum, I foresee firms and organizations implementing measures to ensure that innovations pass internal scrutiny prior to launch, to avoid issues down the line.
All of these platforms have to monetize at the end of the day.
The issue is to ensure that growth is sustainable. For many firms, the legal costs tend to be minimal. However, the branding and reputational costs of an antitrust investigation far exceed any short-term profits for tech giants.
Are there any incentives of startups that policy makers need to keep in mind?
Selling out, or Buying In? The Implications for Policy Makers
One challenge policy makers face is that a startup is not forced to sell to a major tech company. Cisco, Oracle and many others have been absorbing near-competitors for many years.
The reason for acquisitions is, clearly, financial. What has been lost in the discussion thus far is that many VCs seem focused on backing startups that eventually sell out to top companies so they can make a significant financial exit. That is, it’s assumed that a company will sell out, not that it will compete in the long run.
Economic success is being redefined before our eyes as startups calibrate their expectations to line up with those of their funders.
This aspect has been under-explored.
I need everyone to understand that we are at the point where startups are created with the exclusive goal of acquisition in mind.
This goal is literally the subject of one of the first questions many investors ask when meeting with entrepreneurs, because investors tend to make their money when you sell out. It comes right after the screening question of what you will do when a big tech company enters “your” market.
As such, a process and mindset change is going to be important, and cannot be assumed to occur by osmosis.
Nonetheless, the need for increased competition for positive social impact is a well-taken point, although the incentives of startups should be given at least as much weight as those of incumbents, because it is much easier and far more convenient to set up a startup to be sold to an incumbent in the short term than to dominate a market over the long term.
Assume that the major players left the scene today or were broken up. What if most of the newer upstarts still preferred to sell out for riches tomorrow than to build an empire in a decade? How can startups be motivated to stay the course? This is the question policy makers should be asking. What are some plausible solutions?
Are Acquisitions Self-Control Problems? A Behavioral Economic Theory
In economics, we have an important concept called the self-control problem. It’s one of the seminal ideas Richard Thaler won a Nobel Prize for. It’s exactly as it sounds: people have two selves. The first self is a far-sighted planner. This would be a new startup that intends to dominate its market. The second is a myopic doer. This is the same startup, but one that sells out to a buyer instead of taking over the market as originally planned. It’s no different from any problem where instant gratification is a temptation, and it is related to general agency problems in firms.
How can self-control problems be minimized? With commitment devices. A commitment device is a tool that locks you into a behavioral change with either a reward or a punishment. One of the oldest illustrations is from the Odyssey, where Odysseus tied himself to his ship’s mast so he wouldn’t be entranced by the sirens’ songs. It’s like when you give your spouse your password to curb your internet addiction and get some reward or punishment depending on whether you survive the temptation or not.
Some kind of commitment device may be necessary to keep startups motivated in the face of competition to resist the urge to sell out, if the AI space is to reach its competitiveness potential.
Keep in mind that the longer you stay in the game, the more likely it is that everything could blow up in your face, leaving you with almost nothing. On the other hand, you could sell out and leave with less, but still a lot.
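This trade-off can be sketched with the standard beta-delta (quasi-hyperbolic) model of present bias that underlies the planner-doer framing above. All payoffs and parameters here are hypothetical illustrations, not estimates:

```python
BETA, DELTA = 0.6, 0.95  # hypothetical present bias and annual discount factor

def present_value(payoff, years, beta=BETA, delta=DELTA):
    """Value today of a payoff received `years` from now under beta-delta
    discounting; beta < 1 captures the myopic doer's present bias."""
    return payoff if years == 0 else beta * (delta ** years) * payoff

sell_now = 50        # acquisition offer today, in $M (hypothetical)
build_payoff = 400   # long-run value if the startup dominates its market
p_survive = 0.3      # chance of surviving long enough to get there
expected_build = p_survive * build_payoff  # 120 before discounting

doer_value = present_value(expected_build, years=10)         # roughly 43
planner_value = present_value(expected_build, 10, beta=1.0)  # roughly 72

# The myopic doer sells out; the far-sighted planner would have stayed.
print(doer_value < sell_now < planner_value)  # True
```

A commitment device works by changing these numbers: for example, a contractual penalty for an early sale that pushes the value of selling now below even the doer's biased valuation of staying the course.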
It's worth remembering that 90% of startups ultimately fail, and this is something entrepreneurs would rather not be a part of. Although startups are nothing if not heroic, they must not be romanticized.
Towards a New Industrial Organization?
At the very least, it seems that the antitrust definitions and tools of yore - the classic stuff of industrial organization - were written for an era that is no longer representative.
The economics profession is increasingly aware of this and is trying to adapt.
This is one of the most exciting developments in modern industrial organization.
Another complicating factor startups face is that the lines between firms are so blurred, it's hard to tell where the boundaries between Amazon, Google and Facebook lie in terms of AI.
It's commonly known that the field consists mostly of the same research scientists and engineers playing employment musical chairs across the same firms, taking a break only to found a startup and then sell it to a former employer or one of their competitors. As such, the focus on antitrust must be combined with a focus on actually improving competition.
How do we keep the influence of giants to the level of merit and make it worthwhile for startups to stay the course and not sell out so that markets are more complete?