Today, Sir Tim Berners-Lee will get together at the CERN particle physics lab with friends and associates to celebrate the 20th anniversary of the World Wide Web.
On March 13, 1989, Berners-Lee submitted a plan to CERN management describing how to better track the flow of research information at the lab: his paper “Information Management: A Proposal”.
In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet.
By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first Web browser (named WorldWideWeb, which was also a Web editor), the first HTTP server software (later known as CERN httpd), the first Web server (http://info.cern.ch), and the first Web pages that described the project itself.
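To make the two core pieces named above concrete, here is a minimal sketch, assuming only the Python 3 standard library, that issues an HTTP request for the address of that first server and prints the start of the HTML it returns; the URL comes from the text above, though nothing here depends on it still being online.

```python
# Minimal sketch (assumption: Python 3 standard library, network access):
# an HTTP request that returns an HTML document, the two technologies
# from Berners-Lee's 1990 tool set.
from urllib.request import urlopen

# http://info.cern.ch is the address of the first Web server, per the text above.
with urlopen("http://info.cern.ch/") as response:
    html = response.read().decode("utf-8", errors="replace")

# The response body is HTML markup; show just the beginning of it.
print(html[:300])
```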
The browser could access Usenet newsgroups and FTP files. However, it could run only on the NeXT, so Nicola Pellow created a simple text browser that could run on almost any computer.
Today Sir Tim Berners-Lee is working on Linked Data standards and the Semantic Web. He explains his vision of the Semantic Web:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
RDF (the Resource Description Framework) is the cornerstone of the Semantic Web, yet few commercial RDF apps exist, notes ReadWriteWeb. The Web itself, however, has proven popular: today the Indexed Web contains at least 25.42 billion pages (as of Friday, 13 March 2009).
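To give a concrete sense of what RDF looks like in practice, here is a minimal sketch using the third-party Python library rdflib (an assumption; the article does not mention it) to state two facts as subject–predicate–object triples and serialize them as Turtle. The identifier URL is hypothetical.

```python
# Minimal RDF sketch (assumption: the third-party rdflib package is installed).
# RDF models data as subject-predicate-object triples; here we describe a
# person with two statements and print them in Turtle syntax.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")  # widely used "friend of a friend" vocabulary

g = Graph()
person = URIRef("http://example.org/people/timbl")  # hypothetical identifier
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Tim Berners-Lee")))

print(g.serialize(format="turtle"))
```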
The Reliable Adaptive Distributed Systems Laboratory (RAD Lab) at the University of California, Berkeley, is sponsored by the likes of Google, Microsoft and Sun. Armando Fox and Dave Patterson talk to Nodalities about their recent paper, “Above the Clouds: A Berkeley View of Cloud Computing”, which is intended to offer a coherent view of the opportunities cloud computing is creating now and in the future.
Tim O’Reilly believes that “the future belongs to services that respond in real-time to information provided either by their users or by nonhuman sensors.” Such services will be attracted to the cloud not only because they must be highly available, but also because they generally rely on large data sets that are most conveniently hosted in large data centers. This is especially true of services that combine two or more data sources, e.g., mashups.
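As a rough illustration of the kind of data-combining service described above, the following sketch fetches two JSON feeds and joins their records on a shared key. Every URL and field name here is made up for the example; it is a pattern sketch, not a real mashup.

```python
# Illustrative mashup sketch (all URLs and field names are hypothetical):
# combine two JSON data sources by a shared key, the pattern described above.
import json
from urllib.request import urlopen


def fetch_json(url):
    """Download a URL and parse its body as JSON."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))


def mashup(venues_url, weather_url):
    """Join venue records with weather records that share the same city."""
    venues = fetch_json(venues_url)    # e.g. [{"city": "Geneva", "venue": "CERN"}, ...]
    weather = fetch_json(weather_url)  # e.g. [{"city": "Geneva", "temp_c": 7}, ...]
    weather_by_city = {w["city"]: w for w in weather}
    return [{**v, **weather_by_city.get(v["city"], {})} for v in venues]


if __name__ == "__main__":
    combined = mashup("https://example.org/venues.json",
                      "https://example.org/weather.json")
    print(json.dumps(combined, indent=2))
```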