Kevin Kelly on the Web’s Past, Present and Future

Kevin Kelly writes a fantastic essay at Wired on the Internet circa 1995, 2005, and 2015. It’s an excellent read, so I won’t spoil all of the fun. However, two passages are of particular note.

First, he hits the nail on the head with the following:

“What we all failed to see [in 1995] was how much of this new world would be manufactured by users, not corporate interests.”

This is so true. In the mid-1990s, we wondered how we could give users adequate incentives to create useful content. We learned the answer: disaggregation lets good content surface, and we just need good filters to sort this grassroots-generated content properly.

His vision for 2015 is a little too cyborg-y for my tastes, but he predicts that the Internet will act as a giant coordinated Machine much like the human brain:

“Today the nascent Machine routes packets around disturbances in its lines; by 2015 it will anticipate disturbances and avoid them. It will have a robust immune system, weeding spam from its trunk lines, eliminating viruses and denial-of-service attacks the moment they are launched, and dissuading malefactors from injuring it again. The patterns of the Machine’s internal workings will be so complex they won’t be repeatable; you won’t always get the same answer to a given question. It will take intuition to maximize what the global network has to offer. The most obvious development birthed by this platform will be the absorption of routine. The Machine will take on anything we do more than twice. It will be the Anticipation Machine.”

I particularly like Kevin’s description of an “Anticipation Machine”: the system will keep getting smarter, and today’s state-of-the-art technology barely hints at the possibilities.

I think Kevin’s predictions are consistent with two of the bigger policy issues I continue to harp on:

1) We need to ensure that regulatory policy fosters, rather than inhibits, consumer participation in content creation and other online tasks to “run the Machine.” This means strong laws, like 47 USC 230, protecting those who try to develop communities.

2) We cannot fully evaluate the consequences of any pathogen jeopardizing the Internet (viruses, spam, spyware, whatever) until the Machine has had a chance to respond. In the end, the Internet is far more resilient than we acknowledge, and it will organically self-correct many problems if we keep the regulators from screwing it up.