We get, process, or generate various types of data, e.g., XML, JSON, etc. These formats serve their purpose, but what if we could make this data machine interpretable? Currently, all the mentioned formats are machine processable and human understandable, but machines cannot interpret them or reason over them. Therefore, there is a need for a data format that makes data both machine interpretable and user understandable. Here, semantic technologies come to our rescue and provide us with the Resource Description Framework (RDF).

The beauty of RDF lies in representing each entity in the world with a Uniform Resource Identifier (URI). This makes the data machine interpretable, as it becomes easy for machines to consume, process, or crawl it. RDF is a data model (also called a triple model or graph model): every statement about an entity consists of three parts. The first part, the subject, is the entity (e.g., a chair, an event), the thing/noun about which we state information; the second part, the predicate, represents a relation or feature of the subject; and the third part, the object, holds the actual value (a literal or a URI) of the subject with respect to that predicate.

On one hand, RDF makes data smart: machine interpretable and machine reasonable. On the other hand, RDF data is much more verbose, since each entity/property is represented by a long URI. I think we don't have a memory/storage issue these days, since huge storage is available at low cost or satisfied via the cloud. The current need is to make data more intelligent, machine processable, and sharable, which suggests we should go for the RDF representation. This small overview lays the ground for the RDF data format; within the semantic world we often come across another buzzword, i.e., linked data. What is this "linked data" beast?
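To make the triple model concrete, here is a minimal sketch in plain Python, representing a tiny RDF-style graph as a set of (subject, predicate, object) tuples. All the URIs below are made up purely for illustration:

```python
# A tiny RDF-style graph: a set of (subject, predicate, object) triples.
# All URIs here are hypothetical, chosen only to illustrate the model.
triples = {
    ("http://example.org/person/alice",       # subject: the entity
     "http://example.org/ontology/name",      # predicate: a feature of it
     "Alice"),                                # object: a literal value
    ("http://example.org/person/alice",
     "http://example.org/ontology/knows",
     "http://example.org/person/bob"),        # object: another URI
}

def objects_of(graph, subject, predicate):
    """Return all objects stated for a given subject and predicate."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

# Because every statement has exactly three parts, queries stay generic:
names = objects_of(triples,
                   "http://example.org/person/alice",
                   "http://example.org/ontology/name")
print(names)  # {'Alice'}
```

Note how the object position can hold either a plain literal ("Alice") or a URI pointing at another entity; that second case is exactly what makes the data a graph.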
Linked data is nothing different: it is the RDF representation of data, plus the requirement that the access mechanism use HTTP as the transfer protocol. This concept was introduced by Tim Berners-Lee, the father of the WWW. He laid down certain principles for data to be called linked data. For simplicity, I am copying those principles here:
- Use URIs as names for things
- Use HTTP URIs, so that people can look up those names
- When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)
- Include links to other URIs, so that they can discover more things
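The last two principles can be sketched in a few lines of Python. In real linked data, looking up a URI is an HTTP GET that returns RDF; to keep this sketch self-contained, a small in-memory dictionary stands in for the Web, and all URIs are invented for illustration:

```python
# A stand-in "Web": dereferencing a URI yields RDF-style triples about it.
# In real linked data this lookup would be an HTTP GET; here it is a dict.
web = {
    "http://example.org/person/alice": [
        ("http://example.org/person/alice",
         "http://example.org/ontology/knows",
         "http://example.org/person/bob"),
    ],
    "http://example.org/person/bob": [
        ("http://example.org/person/bob",
         "http://example.org/ontology/name",
         "Bob"),
    ],
}

def discover(start_uri, depth=2):
    """Follow links to other URIs, collecting everything we learn."""
    seen, frontier, knowledge = set(), [start_uri], []
    for _ in range(depth):
        next_frontier = []
        for uri in frontier:
            if uri in seen or uri not in web:
                continue
            seen.add(uri)
            for triple in web[uri]:
                knowledge.append(triple)
                obj = triple[2]
                if obj.startswith("http://"):  # the object is itself look-up-able
                    next_frontier.append(obj)
        frontier = next_frontier
    return knowledge

facts = discover("http://example.org/person/alice")
```

Starting only from Alice's URI, the crawler discovers Bob's name, because the object of one triple is a URI we can dereference in turn. That is the whole "discover more things" idea.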
Having been travelling for the last 24 hours, I can collect some of the benefits of using linked data:
- Data is machine interpretable, as RDF is used as the data model. Hence machines can use reasoning to infer new knowledge
- Data can be stored at any place/database, and we can still access it from anywhere in the world, since the basic model stores information with URIs accessible via HTTP
- Data is linkable. We can link our data with any number of relevant entities/sources. This makes information reusable, sharable, and more self-descriptive
- Linked data concepts make the available information less redundant, as more links/sources/metadata make an entity differentiable from other entities
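The first benefit, inferring new knowledge, can be illustrated with a toy forward-chaining sketch. Suppose we declare (with an invented vocabulary) that a "locatedIn" relation is transitive; a machine can then derive triples nobody wrote down:

```python
# Toy reasoning over RDF-style triples: if A locatedIn B and B locatedIn C,
# infer A locatedIn C. All URIs are hypothetical, for illustration only.
LOCATED_IN = "http://example.org/ontology/locatedIn"

triples = {
    ("http://example.org/place/berlin",  LOCATED_IN, "http://example.org/place/germany"),
    ("http://example.org/place/germany", LOCATED_IN, "http://example.org/place/europe"),
}

def infer_transitive(graph, predicate):
    """Forward-chain a transitive property until no new triples appear."""
    inferred = set(graph)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2:
                    new = (a, predicate, c)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

closed = infer_transitive(triples, LOCATED_IN)
# The machine now also "knows" that Berlin is in Europe,
# even though that triple was never stated explicitly.
```

Real reasoners (over RDFS or OWL vocabularies) are far more capable than this, but the principle is the same: explicit, machine-interpretable semantics let software derive facts that were never stated.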
Conclusion: Linked data is the need of the hour. Let's contribute to the Linked Open Data cloud.
Note: please pardon any typos, and please inform me if any contradictory statement is presented.