Web 3.0 – Pictures in the Semantic Web

Some years ago, Web 2.0 was coined as a term for the new Internet. By definition it was not a technological improvement but a metaphor for the phenomenon of the web becoming social. Now some of the experiments around the new Web 3.0 are starting to get pretty impressive. Where Web 2.0 described the Internet as social, Web 3.0 describes it as semantic.

First, it needs to be said that Web 3.0 has nothing to do with a third version of the Internet. Even though "3.0" could be mistaken for a version number in the software sense, it is just a metaphor. It refers to the usage and possibilities of the web rather than to the software. The web consists of all kinds of users and technology, so Web 3.0 is a general trend rather than the software and the physical hubs, switches and computers.

Web 3.0 introduces a new way of understanding data on the Internet. We as humans interpret data in our own way, but many of the services we use, including search engines and hypertext references, are processed by computers before we are presented with the results. One example is how Google Search works. When you type in a text string such as "horse", the search engine is limited by technology to searching among pages where the word "horse" is found, or among images whose meta-text includes the string "horse". This limits how much the computer's power can actually help us, since not all pictures of a horse carry that information. Another problem appears across languages: if you want a picture of a horse, you are restricted to images whose meta-text says "horse", even though there is a good chance that a picture whose meta-text is "Pferd" or "hest" shows the same animal.
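To make that limitation concrete, here is a minimal Python sketch of the idea. The photo names, captions and concept IDs are all invented for illustration, and this is of course not how Google actually works: plain string matching finds only the English caption, while a semantic layer that maps words to a shared, language-neutral concept finds all three.

```python
# Photo captions in three languages; only one contains the literal string "horse".
photos = {
    "img1.jpg": "brown horse in a field",
    "img2.jpg": "ein Pferd auf der Weide",  # German caption, same animal
    "img3.jpg": "en hest på stranden",      # Norwegian caption, same animal
}

def keyword_search(term):
    # Plain string matching: the limitation described above.
    return [name for name, caption in photos.items() if term in caption]

# A semantic layer maps words in any language to a shared concept ID
# before matching (the IDs here are made up for illustration).
CONCEPTS = {"horse": "animal/horse", "Pferd": "animal/horse", "hest": "animal/horse"}

def semantic_search(term):
    target = CONCEPTS.get(term)
    if target is None:
        return []
    return [name for name, caption in photos.items()
            if any(CONCEPTS.get(word) == target for word in caption.split())]

print(keyword_search("horse"))   # ['img1.jpg'] -- the German and Norwegian photos are missed
print(semantic_search("horse"))  # ['img1.jpg', 'img2.jpg', 'img3.jpg']
```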

Some improvements have been made. In Google's Picasa 3 and Apple's iPhoto the system can now recognize faces in pictures, and once you have named the faces it will automatically recognize those friends' faces in your other albums. A new feature is the integration between the photo gallery and other services such as Facebook, which means your photos are auto-tagged as you upload them.
When you also add geographical data, either by entering the locations yourself in the software or by letting the camera's GPS do it for you, the computer knows where the photo was taken. With that geographical data, digital technology can match the pattern in the photo against a digital photo canvas such as Street View.
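As a small illustration of what that geographical data looks like, here is a Python sketch that reads the GPS coordinates a camera embeds in a photo's EXIF metadata, using the Pillow library. The filename is hypothetical, and it assumes the JPEG actually carries a GPS block:

```python
from PIL import Image

def gps_coordinates(path):
    """Return (latitude, longitude) from a JPEG's EXIF GPS block, or None."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(0x8825)  # 0x8825 is the EXIF pointer to the GPS sub-directory
    if not gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores each coordinate as (degrees, minutes, seconds) rationals.
        degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -degrees if ref in ("S", "W") else degrees

    # GPS tags 1-4: GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude.
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(gps_coordinates("holiday_photo.jpg"))  # e.g. (48.8584, 2.2945)
```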

Below you can see a great video explaining how this works. In a presentation held at TED, Blaise Aguera y Arcas from Microsoft explains the new features in Bing Maps.

http://www.ted.com/talks/lang/eng/blaise_aguera.html
