A brief introduction to what happened during the Web’s early years and where it’s heading in the immediate future. Web professionals still have much to learn from history, since most new creations are variations on very old solutions with a modernized surface.
We begin with a summary of the Web phenomenon: what makes the Web special compared with what the Internet offered before, and the Web generations we’ve moved through (Web 1.0, 2.0, etc.). The Internet and the Web should not be lumped together, as they so often are. The Internet is the infrastructure: the network and all the connected equipment that can talk to each other. The Web is strictly one service among many that use the Internet as a communication network. To explore the Web, we use web browsers, which send and retrieve information over the Internet.
Because of the Web’s inclusive and unguided nature, most of us stumble upon sites that are at different stages of development, or have yet to embrace newfound design conventions or trends. Some first generation websites still exist and perform well enough if they have a simple purpose. Now we will go through the evolution of the Web since its creation atop the Internet.
Web 1.0 – a network of documents
While the Internet has been around in some form since the 1960s or the 1980s (depending on the definition), the concept of the Web came into being in 1990, when Tim Berners-Lee and Robert Cailliau wrote a proposal about the WorldWideWeb for their employer, the research organization CERN.
They wanted to use hypertext:
“…to link and access information of various kinds as a web of nodes in which the user can browse at will…”
Hypertext is more than plain text: it is text that contains links to other texts. Hypertext is a building block of HTML (HyperText Markup Language), which Berners-Lee designed to lay out text content on a web page, and which now forms the basis of every website. Over time, HTML has developed into a robust way to present rich media on the Web.
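As a minimal sketch, a hypertext document is simply text with embedded links; in HTML, a link is written with the anchor (`a`) element. The page below links to CERN’s restored first website; the page title and wording are invented for illustration:

```html
<!-- A minimal hypertext document: plain text plus a link to another node. -->
<!DOCTYPE html>
<html>
  <head><title>A web of nodes</title></head>
  <body>
    <p>This page links to
      <a href="https://info.cern.ch/">the first website</a>,
      so a reader can browse from node to node at will.</p>
  </body>
</html>
```

The `href` attribute is the whole trick: it turns a run of text into a node in the “web of nodes” Berners-Lee described.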
In 1990, Berners-Lee released the WorldWideWeb browser – the only way to view the young Web.
In 1993, the NCSA1 released a more capable browser, Mosaic, which popularized the Web in many fields. In 1994, Netscape Navigator (based on Mosaic) was released and became the dominant browser even outside academic circles.
There were plenty of services available via the Internet before the Web was born, but they were harder to find and use, owing to the specialized software required, the steep learning curves, and the many different standards, formats, and protocols.
Some example Internet services:
- Gopher – found and retrieved documents and information through hierarchical menus. A search could take minutes or even hours.
- Name/Finger – reported contact details for colleagues and showed whether they were online at the workplace.
- FTP (File Transfer Protocol) – enabled you to upload documents to and download them from a server, allowing you to publish information on the Internet and collaborate with colleagues. Imagine Dropbox, but with no syncing, no backup, and a clunky interface.
- Email – colleagues were emailing each other as early as the 1960s, but emailing ‘anyone in the world’ only became possible in the following decades, depending on your email provider.
Web 1.0 built on all these earlier services. The big difference was that the Web was further developed, offered standardized ways of interacting with information, and was more accessible to the public.
Characteristics of Web 1.0
Through the linking of documents (pages), the Web appeared as an electronic library to anyone with access to a computer and a phone line. Instead of librarians, link directories (Yahoo!) and early search engines (WebCrawler, Lycos, AltaVista etc.) helped you get around. In the late 90s, Web-based services provided an easier interface for email (Yahoo!, Hotmail, etc.) and so email became more popular outside work and academic contexts.
Web design 1.0
In the introduction, I mentioned the word generations rather than versions. This is because websites of different generations live side by side: not all sites embrace newer design conventions, or even update old pages published long ago.
Hallmarks of Web 1.0, rarely seen on later generations of the Web, include:
- Poor typography – Centered body text is hard to read, and the reader can get lost at each line break. Many sites also had serif typefaces (fonts where the letters have ‘heels’), which looked quite smudged on the low-resolution screens of the time.
- Visitor counters (publicly visible) – site owners wanted to show visitors that someone, preferably many people, had been there before.
- Outgoing links – Webmasters loved to link to other websites, but the links were often irrelevant to the subject matter and more about making friendly suggestions. For example, it was common for websites to link to AltaVista, then the giant of search. Sites also exchanged links to drive traffic to each other; these ‘web rings’ helped websites attract visitors before Google existed.
- Background imagery – Often it was a tasteless choice of wallpapers, a photograph, or illustration, which only made the website more cluttered and hard to read.
It was also common for sites to be made for specific screen sizes, laid out with tables, and to function differently in different browsers. Not forgetting the background sounds, which started anew at each page view, and the completely pointless animated icons.
In the late 90s, sites built entirely with animation technology appeared. Some showed an extraordinary focus on animation and innovative design, but usability and content were not much of a priority.
Web 1.0 was amateurish because too few people were qualified to design user-friendly websites; it was an immature industry. Sites were usually not versatile enough to adapt to visitors’ needs or technical constraints.
Personally, I am grateful that the website I built in 1998 about Egyptian mythology is gone from the web service Tripod. It contained a lot of design choices that would leave me equally embarrassed and amused if anyone saw them: centered gray text on a messy black background, and, for no reason, an animated GIF of the devil with a fire iron in the footer.
The next generation of the Web raised standards and expectations. Designers and publishers began to focus on the user, and the business community made an effort to understand the Web, although its early enthusiasm inflated an economic bubble.
Web 2.0 – the engaging web
The concept of Web 2.0 was coined at the turn of the century and describes the participatory Web, where it’s easier for people to publish, share, comment, and socialize online, not only through their own blog, but using other people’s websites. Web standards were developed to shape the Web, including RSS (Really Simple Syndication) that allowed people to subscribe to content without disclosing their email addresses.
So-called mashups emerged: the reuse of one or more external services to create something new. Embedding information from elsewhere became common, such as including content from Google Maps or YouTube on your own website. The Web was growing so much that there was an economic incentive to focus on user experience, and thoughtfully designed websites became more common.
At the turn of the century, every third person in the developed world had access to the Internet. Most were able to take advantage of the Web’s content. Globally, 7 % had access to the Internet.
Source: ITU (International Telecommunication Union – an agency of the UN)2
Because of the newfound opportunities to make money on the Web, publishers of well-designed websites began to measure and analyze how visitors used their sites. These measurements were used to optimize the navigation structure, simplify payment systems, monitor the popularity of news / content, and much more.
Along with the social possibilities, Web 2.0 also became more functional, as seen in the ever-decreasing need to install software on a computer. Many office applications have been replaced or augmented by online services accessed through the browser.
During this period, the Web reached beyond the desktop. Mobile ‘broadband’ launched, and an increased use of laptops, mobiles, and tablets changed the perception of when, where, and how to use the Web. From being tethered to a connection at home, work, school, or at an Internet cafe – to being connected in any environment. Even when running for the bus with a cheese sandwich in your hand.
Between 2007 and 2013, half the Web’s traffic became consolidated around just 35 sites. Some people now consider Facebook to be the Web, never going anywhere else. This is especially the case when considering mobile access.
Source: The Connectivist3
Characteristics of Web 2.0
With the advent of what is sometimes called the social web, people needed fewer websites to connect with one another in greater numbers. Examples include Myspace, Wikipedia, YouTube, Spotify, Twitter, Facebook, and WordPress. With this centralization of users, it became possible for people to log on to different sites using the same credentials, such as when you sign in to a music streaming service with one of your social network identities.
Large, successful sites, often fueled by user-generated content, could offer huge amounts of diverse material, allowing people to spend more time without leaving. Think how easy it is to fall into the wiki-hole when you visit Wikipedia for one specific item and end up reading unrelated and possibly bizarre articles. Similarly, YouTube offers tempting related and ‘up next’ videos to keep people watching.
Before sites offered so many suggestions and related links, people navigated in a more linear fashion to achieve a goal.
Something rarely noted about the second generation of the Web is that users actually make fewer mistakes and rarely need to start over (perhaps by returning to the home page as in Web 1.0). This improvement in user experience was made possible by the refinement of web technology. In the past, search engines would not offer keyword suggestions, leaving you to start afresh or refine your own terms. Nor were ever-updating news feeds or notifications available, as you now have on Facebook and Twitter. Back then, you had to reload the page to see if anything had changed.
Web design 2.0
Websites and web designers introduced many elements that we now associate with Web 2.0, such as:
- Floating advertisements. Ad servers were often overworked and slow to deliver ads, making page load times painfully slow. Some people installed ad blocking browser extensions.
- Flash. Often used for intros before a visitor was allowed entry to the home page, and for adverts. Some people removed (or never installed) the Flash plug-in to avoid these bandwidth-heavy animations.
- Search suggestions. Search engines offered related terms to your query, often derived from what other people were searching for.
- Search fields placed at the top of the page. Not newfound knowledge, but usability testing confirmed that many people prefer to search for what they want rather than browse. It became the convention to place the search field at the top of the page so that it was more visible, and easy to find when people gave up on the navigation menu.
- Navigation based on the visitor’s history. By tracking the individual visitor’s activity, some sites provided shortcuts and suggested links. For example, Amazon’s homepage showcased books that the individual had looked at during previous visits.
- Streaming video. While quality and bandwidth had to be considered, YouTube had proved that rich media was popular. Nobody expected to download a video anymore; everything was streamed instantly.
- Maps. It became common for contact information to be complemented with a functional map, most often from Google Maps.
- Social feeds. Embedding content from other websites (video, maps, etc.) extended to showing a site’s own Facebook or Twitter feed. Buttons to follow, retweet, like, and add to your browser’s favorites or to a read-later service abounded. The buttons just would not stop!
Web 3.0 – a network of data (also known as the semantic web)
The semantic web refers to recording and presenting the meaning of information in ways that computers can understand. With it, computers, including your laptop and smartphone, can understand that a series of numbers is actually a date. The semantic web adds a layer of metadata that tags information with contextual meaning: the name of a city can be tagged with its geographical location, and the travel routes to it from your own location.
In 2008, 61 % of the population in the developed world were connected to the Internet and 23 % globally.
It is still highly debatable when the third generation of the Web began. If we count the use of Microformats, a markup framework for describing information within web pages and other web services, it would be around 2008.
The definition of Web 3.0 is not agreed upon. Some believe that we are already there, that what we experience online every day is Web 3, while others think it is far down the road. Some people say that no one will notice it, that nothing has changed – the same old web just got a little better. Whatever the state of Web 3, it is about knowledge, meaning, and relevance based on the user’s personal perspective and individual needs. It includes personalization, whereby the service adapts to the user based on explicit preferences and past activity, so as to be more relevant.
Web 3 techniques are already well used by many sites and online services. Standards from Microformats.org and Schema.org help describe what specific information is about in a way that browsers, search engines, and other web services can understand. The semantic web offers digital publishers a way to mark information’s geographical origin, contact information, reviews, and much more. Even more interesting is what is called linked data. It boils down to the Web being a collection of open databases that follow standards to such an extent that different databases (owned by different organizations) can be accessed and their data combined to create new knowledge, or at least insight. When information is well structured according to known standards, it can be reused by other services, creating new value and benefits for end users.
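As a sketch of what such markup looks like, here is a hypothetical contact block annotated with Schema.org microdata (the organization name and address are invented for illustration; the `itemscope`, `itemtype`, and `itemprop` attributes are the standard microdata vocabulary):

```html
<!-- A hypothetical contact block annotated with Schema.org microdata. -->
<!-- itemscope/itemtype declare what the block describes; each itemprop -->
<!-- tags one piece of information with its meaning. -->
<div itemscope itemtype="https://schema.org/Organization">
  <span itemprop="name">Example Travel Agency</span>
  <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="addressLocality">Gdansk</span>,
    <span itemprop="addressCountry">Poland</span>
  </div>
  <a itemprop="email" href="mailto:info@example.com">info@example.com</a>
</div>
```

A service that understands Schema.org can now treat “Gdansk” as a place, and the whole block as an organization with an address, rather than as mere words on a page – exactly the machine-readable meaning the semantic web is about.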
Glossary – augmented reality
Enhanced or altered reality; the technology adds information from the Internet as a layer on top of physical reality through visual displays, tactile technology, bracelets, etc. For example, a smartphone app uses the phone’s location to show directions to local amenities as the person looks around through the camera.
Future services should be able to present information in ways beyond the creator’s control and expectations. The mash-up of the original information with the presentation method for the individual’s contextual needs creates something unique. It can be a HUD (head-up display) – screens embedded in a pair of glasses overlaying what you see with information in a context-aware manner.
Classic search technology becomes a problem on a small screen, and worse when there is no screen at all. The third generation of the Web makes technology and information more adaptable and usable in the real world, away from the keyboard. Before long, you will no longer automatically reach for your mobile or sit down at your desktop computer for everyday information, as you will already be well informed via your wearable tech and the smart objects around you. You will search for information less often, because services will notify you when there is relevant information, amenities, or friends and colleagues around you.
Characteristics of Web 3.0
Many are no doubt familiar with the geo-social perspective on data: information is geographically tagged so that a user can fetch information created nearby. This is what the geo-service Foursquare is all about; everything is based around location. Similarly, online shops know so much about you and your shopping habits that you receive targeted offers based on who you are, where you are, and everything else that can make the content more appealing to you. Some shops use personalization features that suggest buying sandals, instead of winter jackets, in December if you happen to find yourself temporarily in Australia.
Perhaps the most important feature of Web 3.0 is that services can now be more precise in guiding their users to sought-after or relevant knowledge. Wolfram Alpha is an example of a ‘search’ engine, often called an answer engine, which tries to generate correct answers to questions rather than a list of information resources. You don’t search the web with Wolfram Alpha; you challenge it to work out problems or provide statistics. It does this by sourcing results from structured knowledge that is machine-readable (that is, semantic). It goes further still, combining facts to compute new results. Many do not know that Wikipedia also organizes information in a similarly structured way.
Web design 3.0
It may still be a bit early to provide examples of truly third generation websites. Sometimes, though, you will see search results that have a creepy relevance to you as an individual. Google Search’s Knowledge Graph (sometimes shown to the right of the search results) displays supplemental information drawn from various sources.
When Wolfram Alpha answers a question about the place once called Danzig, it refers the user to the new name, Gdansk. It assumes the user meant a place, but states that it knows other facts: that Danzig is also the name of an artist and of a music album. There is no conventional search; rather, it explores a well-structured body of knowledge. Compare this with the millions of hits you get on Google for the same question (and note that Google only hints at the name change).
Third generation web design will probably be characterized by websites that place content relevant to the individual visitor right in the spotlight. Over the short and medium term, more sites and online services will tap into the vast amounts of structured data, enabling the kind of cross-linking that search engines and Wikipedia already do, along with contextual, personalized notifications. Web 3.0 is where information can finally live its own life, freed from its containers.
The Web is an information rich platform, not just a social place to comment on videos of cats. To develop a modern website, you need a deliberate information architecture, which is our next topic.