Hi, I'm currently working with Semantic Web technologies, and there is a lot going on. Linked Data has seen a lot of development in academia. Our projects are in the archaeology, cultural heritage, and linguistics domains. RDF is now one of the standard formats for publishing and integrating data from different sources. There are also tools to generate RDF from relational databases and other formats (check out D2RQ, Ontop, Karma, the RDF extension for OpenRefine, or the rdflib Python package). For triple stores, I'd recommend Blazegraph; we use it on many projects, and so far it works fine for our not-really-big data. Other triple stores, in addition to Fuseki and Virtuoso, include RDF4J and GraphDB.
The most recent hype in the Semantic Web world is around word embeddings and word2vec: the idea is basically to convert RDF graphs into sequences of entities and relations (RDF2vec). In short, that should give us higher-quality entity and relation predictions, which means we'll be able to extract much more semantics from large text corpora.
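The core trick in RDF2vec is the walk-extraction step: random walks over the graph produce "sentences" of alternating entities and relations, which you then feed to word2vec as if they were text. A toy sketch with a hand-made graph (the DBpedia-style names are just illustrative):

```python
import random

# Toy RDF graph as adjacency: subject -> list of (predicate, object).
graph = {
    "dbr:Berlin": [("dbo:country", "dbr:Germany"), ("rdf:type", "dbo:City")],
    "dbr:Germany": [("dbo:capital", "dbr:Berlin"), ("rdf:type", "dbo:Country")],
}

def random_walks(graph, start, depth, n_walks, seed=0):
    """Generate walks of alternating entities and relations from `start`."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk = [start]
        node = start
        for _ in range(depth):
            edges = graph.get(node)
            if not edges:
                break  # dead end: entity has no outgoing triples
            pred, obj = rng.choice(edges)
            walk.extend([pred, obj])
            node = obj
        walks.append(walk)
    return walks

walks = random_walks(graph, "dbr:Berlin", depth=2, n_walks=3)
```

Each walk, e.g. `["dbr:Berlin", "dbo:country", "dbr:Germany", ...]`, is then treated as one sentence for a word2vec implementation (gensim, for instance), so entities that appear in similar graph contexts end up with similar vectors.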
Thanks for your comment!
Have you had a chance to look at Amazon Neptune?
Also, what kind of tools do you use to build ontologies?
I haven't tried Amazon Neptune. We mostly work with open-source software since we are in academia.
For ontology modelling, I use Protégé.
Here is a list of more options.
I'm curious, too! I worked with all that Linked Data stuff at university and I really like the concepts. SPARQL is a pretty cool query language, and the idea that all information is connected in a decentralized way is appealing. However, I think the main problem is adoption: for the Semantic Web to become useful, it needs to be used far more extensively. Another problem is the tooling. For instance, as far as I know there are basically only two triple stores (Fuseki and Virtuoso), and I found them both awfully slow.
EDIT: Just in case you're interested: building on a university project to semantically classify RDF properties' relevance to their respective classes, I built a Linked Data-powered trivia application as a kind of proof of concept ;-)
One of my colleagues attended a conference where it was said that by the end of next year 80% of enterprises will have their own taxonomies and ontologies, but I just don't see that happening yet.
On the topic of triple stores, there is also AllegroGraph, and Amazon recently launched Neptune, their graph database as a service. It supports both triple stores and property graphs. But that's just the storage; the lack of development tools is frustrating... 😐
What was mentioned as a reason for that growth? RDF has been around since 2004, but it's not widely used. Maybe 80% of enterprises will have a few ontologies for some purposes, but I don't think the Semantic Web will ever take off.
What project are you using those techniques in?
I'd have to double check that.
We're not using it in production yet, but we're willing to adopt it. I work in market research, and it would be used to connect and link statistical data from various industries and countries: for instance, sports matches, their attendance, team owners, companies that support them, etc.
In the Netherlands, Semantic Web technologies are currently getting some traction in the construction/BIM and asset management areas. Especially for projects where multiple construction and maintenance parties are involved, each with their own vocabulary, these technologies can assist in more effective and reliable transfer of information between parties.
Around these ideas, the COINS standard for exchanging BIM information was developed. It started as a Dutch standard, but I've heard there are plans for European or international standardisation. Next to that, there is CB-NL, a central ontology to which parties can link their own ontologies. This allows mapping between data models created by different parties (via the CB-NL) without explicit alignment between those parties.
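The hub-and-spoke idea behind that kind of central ontology can be sketched in a few lines: each party maps its own terms to a shared concept, and two terms align exactly when they point at the same hub concept. All identifiers below are hypothetical, not real CB-NL IDs:

```python
# Each party links its own vocabulary to a central, CB-NL-style concept.
# "a:Brug" (Dutch for bridge) and "b:Bridge" are invented party-local terms.
party_a = {"a:Brug": "cbnl:Bridge"}
party_b = {"b:Bridge": "cbnl:Bridge"}

def aligned(term_a, term_b, map_a, map_b):
    """Two local terms align if both map to the same central concept."""
    concept = map_a.get(term_a)
    return concept is not None and concept == map_b.get(term_b)

ok = aligned("a:Brug", "b:Bridge", party_a, party_b)
```

The payoff is that party A and party B never need a direct A-to-B mapping: each maintains one mapping to the hub, and every pairwise alignment falls out of that.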
As far as I know, these standards are not widespread yet, and are currently mainly used in small, experimental settings.