What is the semantic web?
Another idea from the inventor of the web, but what does it mean for the rest of us?
The semantic web focuses on data rather than on documents, making it a far more structured and detailed way of accessing information than the World Wide Web invented by Tim Berners-Lee in the late 1980s.
However, Berners-Lee still played a central role in its inception, developing the idea alongside AI researcher James Hendler and computer scientist Ora Lassila. The idea was first set out in a 2001 Scientific American article, in which the trio discussed connecting information using a network that could be read by machines.
According to the World Wide Web Consortium (W3C), the semantic web is "a common framework that allows data to be shared and reused across application, enterprise, and community boundaries".
The concept is to offer people the information they're looking for at the time they need it. One of its key philosophies is that although the information presented on the internet is useful, it's not always needed at every point.
The semantic web essentially allows information to be connected in a network that can be easily read by machines, whether computers, IoT devices, mobile phones or other devices commonly used to access information.

It's built on the premise that data within web pages is useful, but not in all circumstances. One of the biggest hurdles of the web as it stands is that most data is created using forms and published as HTML, which is designed for display rather than processing, and there's no unified way of publishing data so that anyone can manage it. The semantic web takes the idea that if this data can be repurposed, it becomes more useful to everyone.
Schema.org was formed by a number of organisations (notably Google, Bing and Yahoo) to extend the web's semantic metadata. The goal is to answer questions directly from the best sources on the web, rather than serve up a search page full of document links.
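To make this concrete, here is a minimal sketch of the kind of schema.org markup a page might carry, expressed as JSON-LD (the format schema.org recommends for embedding in web pages). The article details here are invented for illustration, not taken from any real page.

```python
import json

# A hypothetical schema.org description of an article, as JSON-LD.
# All names and values below are illustrative.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is the semantic web?",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2021-01-01",
}

# Serialised, this is what a search engine would parse out of a
# <script type="application/ld+json"> block in the page's HTML.
print(json.dumps(article, indent=2))
```

Because the markup names the type of each thing ("Article", "Person") rather than just its appearance, a search engine can answer "who wrote this?" without scraping the page's layout.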
The most important semantic web technology is the Resource Description Framework (RDF), a common framework for describing resources. It represents metadata that can be parsed and processed by systems, rather than just displayed to users.
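RDF's data model can be sketched in a few lines of plain Python: every statement is a (subject, predicate, object) triple, and a graph is just a set of them. This is an illustrative toy, not an RDF implementation (a real one would use a library such as rdflib), and the example.org URIs are invented.

```python
# Minimal sketch of RDF's triple model. A graph is a set of
# (subject, predicate, object) statements, each part identified
# by a URI (or a plain literal for the object).
graph = set()

# Hypothetical resources and properties; none of these URIs are
# real vocabularies.
graph.add((
    "http://example.org/people#tim",
    "http://example.org/terms#invented",
    "http://example.org/things#WorldWideWeb",
))
graph.add((
    "http://example.org/people#tim",
    "http://example.org/terms#name",
    "Tim Berners-Lee",
))

def describe(subject):
    """Return every (predicate, object) pair stated about a subject."""
    return {(p, o) for s, p, o in graph if s == subject}

print(describe("http://example.org/people#tim"))
```

The uniform shape is the point: because every fact has the same three-part structure, a machine can query any RDF graph generically, without knowing the schema in advance.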
The semantic web is very useful for solving many of the problems raised by the World Wide Web.
For example, data silos can be largely eradicated, with links between data and the outside world, or more localised places such as within an organisation, operating seamlessly. Semantic metadata tags mean the information can all reside in one place, searchable by tag, making it far easier to uncover.
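The tag-based search described above can be sketched as a simple lookup over annotated items. This is a hedged illustration of the idea only; the item names and tags are invented, and real systems would query RDF stores rather than Python dictionaries.

```python
# Illustrative sketch of finding documents by semantic metadata tags,
# assuming each item has been annotated with a set of tags.
# All names and tags below are invented for the example.
items = {
    "report-q1.html": {"finance", "quarterly"},
    "player-stats.html": {"sport", "football"},
    "supplier-list.html": {"finance", "supply-chain"},
}

def find_by_tag(tag):
    """Return every item annotated with the given tag, sorted by name."""
    return sorted(name for name, tags in items.items() if tag in tags)

print(find_by_tag("finance"))
```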
If the linked data is in a publicly searchable place, such as the wider internet, users are also able to find intricate relationships between the data and the information they own, opening up its meaning to go far beyond the scope of the original data.
Another great use for the semantic web is media management. For example, the BBC used the semantic web to power its database of player information during the 2010 World Cup, and much of its website runs on semantic web technologies so it can rapidly update and organise the vast amount of information it holds.
In supply chains, the semantic web can keep rapidly changing data organised, whether that's information supplied by manufacturers, vendors, distributors, logistics firms or supply chain managers at different points in the chain.
One of the key benefits of the semantic web is having large amounts of data, knowledge and information made understandable and accessible to machines, especially artificially intelligent bots, virtual assistants and agents.
The simplicity of the RDF data structure and the schema's optional nature mean that it's easy to combine different sets of data. This is particularly useful for big data projects where the variety of data within an organisation can present a challenge.
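The ease of combining datasets can be shown with a small sketch: if two sources each publish their data as sets of (subject, predicate, object) triples, merging them is just a set union, with no schema migration. The URIs and values here are invented for illustration.

```python
# Two hypothetical datasets from different silos, each a set of
# RDF-style (subject, predicate, object) triples. All URIs invented.
sales = {
    ("http://example.org/order#1", "http://example.org/terms#amount", "250"),
    ("http://example.org/order#1", "http://example.org/terms#customer",
     "http://example.org/customer#42"),
}
crm = {
    ("http://example.org/customer#42", "http://example.org/terms#name",
     "Acme Ltd"),
}

# Union merges the datasets: duplicates collapse, and because both
# sides share the customer URI, the order is now linked to a name.
combined = sales | crm
print(len(combined))
```

Contrast this with merging two relational databases, where mismatched tables and column types usually force an up-front schema mapping before any rows can be combined.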