Five Ways That the California Age-Appropriate Design Code (AADC/AB 2273) Is Radical Policy
When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because…kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC/AB 2273, just signed by Gov. Newsom) is an example of such a bill. To effectuate its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.
In no particular order, here are five radical policy ideas baked into the AADC:
Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.
The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments must be made available to government enforcers on request, which means regulators and judges are the real audience for them. As a practical matter, given the litigation risks associated with the impact assessments, a business’s lawyers will control the process–with the associated delays, expenses, and prioritization of risk management over improving consumer experiences.
While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during innovation development (usually as voiced by the lawyers), they encourage self-censorship when a business isn’t confident that its decisions will please the enforcers, and they force businesses to perform the cost-benefit calculus before gathering any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.
Also, to the extent that businesses self-censor their speech (and my position is that all online “features” are “speech”) because of this regulatory intervention, permissioned innovation raises serious First Amendment concerns.
Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child-protection laws center on parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interests of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but the bill drafters then ignored this evidence and stripped out the parental-consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if they don’t like the Internet the government is forcing on their children.
Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary,” that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there are no individualized considerations at all.
The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17-year-olds and 2-year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide a net benefit to a majority or plurality of users, while other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit suggests the business has violated its fiduciary obligation towards that subpopulation–creating unmanageable legal risk despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligations to diverse populations with conflicting needs, it’s impossible to serve those populations at all. To avoid this paralyzing effect, services will screen out children entirely.
Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks those scans create (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine, everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized, widespread face scanning? What does it do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face-scanning infrastructure to advance goals that aren’t in their constituents’ interests? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.
(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine.)
Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:
The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.
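To make the quoted figures concrete, here’s a back-of-the-envelope sketch (mine, not the court’s) of what an added delay could mean under those numbers. The compounding treatment of the per-second loss rate, the hypothetical 5-second age-assurance delay, and the linear extrapolation of Amazon’s 100-millisecond figure are all my own illustrative assumptions:

```python
# Toy model of the latency statistics quoted above. All assumptions are
# mine: visitor loss compounds at ~10% per added second, and Amazon's
# reported ~1% sales loss per 100 ms is (crudely) extrapolated linearly.

def visitors_retained(baseline: float, added_seconds: float,
                      loss_per_second: float = 0.10) -> float:
    """Visitors remaining after added load time, compounding per second."""
    return baseline * (1 - loss_per_second) ** added_seconds

def sales_lost_fraction(added_ms: float, loss_per_100ms: float = 0.01) -> float:
    """Fraction of sales lost from added latency, linear in delay."""
    return (added_ms / 100) * loss_per_100ms

if __name__ == "__main__":
    # Hypothetical: an age-assurance step adds 5 seconds before first load.
    print(f"{visitors_retained(1000, 5):.0f} of 1,000 visitors remain")  # ~590
    print(f"{sales_lost_fraction(5000):.0%} of sales lost")              # ~50%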
After the AADC, before you can go to a new site, you will have to either scan your face or upload age-authenticating documents. This adds seconds or minutes to the navigation process, plus the inhibiting effect of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect they will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who must convince users to complete age assurance before they’ve earned users’ trust. It’s radical for the legislature to make such a profound, structural change to how people use and enjoy an essential resource like the Internet.
A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interests of children”? Did it prepare its own impact assessment, as it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?
I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?
Prior AADC coverage:
- Some Memes About California’s Age-Appropriate Design Code (AB 2273)
- An Interview Regarding AB 2273/the California Age-Appropriate Design Code (AADC)
- Op-Ed: The Plan to Blow Up the Internet, Ostensibly to Protect Kids Online (Regarding AB 2273)
- A Short Explainer of How California’s Age-Appropriate Design Code Bill (AB2273) Would Break the Internet
- Will California Eliminate Anonymous Web Browsing? (Comments on CA AB 2273, The Age-Appropriate Design Code Act)