Q&A: Conrad Wolfram on communicating with apps in Web 3.0
Conrad Wolfram explains how applications will increasingly encode the expertise of humans, to give us an easier time sorting through data on the web.
The arrival of Wolfram Alpha last year brought the work of Wolfram Research into the mainstream, with many hoping the search tool would rival Google's.
But it quickly became clear Wolfram Alpha was something entirely different. Its developers call it a "computational knowledge engine". Indeed, rather than direct users to a page on the internet, it comes up with the answer to the question.
Such automated, real-time answers will spread to the rest of the web, according to Conrad Wolfram, managing director for the company in Europe.
The British scientist spoke to IT PRO about how embedding knowledge will help make the wealth of data on the web easier to understand.
How do you see what you're working on and what's the future of the web through your eyes?
I think we're going to see a much more active web. In a sense, a web where stuff works on those websites, rather than just dead information.
Now some people have started hinting, maybe that's Web 3.0, where you're actually generating new knowledge, the computer is generating new information, rather than perhaps with Web 2.0 [where] the humans are generating the actions.
I suppose with Web 2.0, a lot of people talk about how the human users generate the content. I think we're now getting to an era where that real-time content generation can not only be directly from humans, but from where the computer is producing new results in real time, responding to a question.
Wolfram Alpha does this, which is why it's different from traditional search engines: it doesn't just search other people's answers.
It's clear how Wolfram Alpha works for search, but how would it work for the rest of the web as a whole?
One thing that's happening with Wolfram Alpha, though you haven't completely seen it yet, is that when it generates an answer, it's generating a mini-application automatically that it's posting on the web.
There are a few examples where you can try this, and you get pull-downs and things automatically generated. It's this idea that you're going to have instant applications as a way to communicate on the web, so things that actually operate, generated not just once, but custom generated for each use, so to speak, if necessary. I think it's the direction we're going.
We've got technology that can make instant applications automatically. The connection between readers and authors is going to get much closer, so that often an author will set up an application instead of just writing something. You can imagine a journalist, as an example, setting up an application to represent some story they're writing, which the reader can then interact with.
That happens a little bit today. Look at the BBC site. They sometimes have specific applications, but it takes an awful lot of work to set that up.
The new thing will be that those will be instant, easy, something that essentially most people generating information like that will be able to do. As an example, we have a site called demonstrations.wolfram.com where there are close to 6,000 mini applications posted.
At the moment, they can just be read with Mathematica Player, a free player that can be downloaded, but that's an example, if you like, of knowledge apps that have been posted by non-specialists in programming.
I think you'll see much more of that, not only in science and technical areas, but in things like journalism, wherever someone wants to communicate some ideas, where they might have used a graph or a chart before. In the future they will use an application.
One of the trends in the UK and elsewhere this year has been freeing up public data. How does that come into play with something like this?
It's great that Tim Berners-Lee and others have got the data starting to be released. The turnaround from what happened before, where governments wanted to charge for the data, to having it for free is clearly a really good start.
But if you really want to democratise government information, so that everyday people can access it, you've got to make it accessible as well as available.
Accessible means that you need to be able to interact with the information yourself easily.
The only way to do that is the sort of way that Wolfram Alpha does things, where you can compute information directly. The information is set up so you can ask questions and it can compute new answers for you, or so that people can instantly get applications made which represent the set of data they are looking at.
But right now, if you want to interact with that data, anything other than eyeballing it, you have kind of got to be a programmer.
That situation really has to change. I noticed David Cameron in his TED Talk was talking about exactly this, and about enabling citizens to check up on and see what governments are doing.
That's only possible if citizens can actually interact every day with the data without themselves being specialists. So you need to encode a certain amount of expertise into the applications to allow people to interact this way with the information.
I think what we'll see is this sort of encoding of expertise one level down, where in a sense the expert informs the building of the kind of answers you're interested in, through a layer of automation that the computer and the application provide.
In a way, this is what Wolfram Alpha itself does, isn't it?
It is, although I think we'll see ever richer output forms.
In the future, I think we may be able to have cases where those are applications. Instead of just getting a chart, you have something interactive to work with.
What that does is give a much higher bandwidth of communication. What that means is that communication between the author, whether that's a computer author or a human author, and the reader is a much higher bandwidth flow, because the reader can really interact with the information that's being presented rather than just look at these lower bandwidth pictures and things.
The bigger picture here is, as we automate more in life, more in the world, what is it that we as humans really need to know, in terms of the base way it's working, as opposed to what do we need to know to operate the automated systems?
There are many areas of life where we're very used to that idea. I always give driving a car as an example. A hundred years ago, you clearly needed to know how a car worked in order to drive it, you needed to know how to advance the ignition and things. Nowadays you don't.
The act of driving, operating the automation of the car, is pretty separated from the act of knowing how to build a car or how to maintain it.
There are many areas where the automation will allow that kind of process to occur, but with knowledge rather than physical actions.
How long before we start to see applications like this?
I think that it's starting. A big area will be [ebooks]. Right now, when people talk about ebooks, they talk about basically electronic PDFs or versions of the books. They're just sitting there. They're relatively dead.
I think that with the great interest in [ebook] readers and the iPad and so forth, we're going to see real interest in using the technology to "read" things much more actively.
If you're talking about maths or science or engineering, we're going to see some changes to how people write things up.
In a sense, our demonstrations site is an early example of that kind of publishing. We have people who have published their research results as demonstrations instead of writing papers. It's a very different way to interact with the knowledge they've come up with.
I think we're starting to see this change. I would say we'll see many things pop out in the next couple of years, where instant knowledge generation, computing new results, is a very regular and important part of what people do, in the same way as looking stuff up on the web. It's just part of everyday life.
I think it will start in the next couple of years, but it will be longer before it's very ingrained in what people do every day.
With that automation in mind, how much of Wolfram Alpha is automated? Earlier this year, our sister title PC Pro posted a blog post testing the site with some pretty silly questions - mostly lyrics from songs - and got back some very good answers. How much human input is there in Wolfram Alpha?
It's a complicated process. When we suck the knowledge in, it's a mixture. We're getting better at automating this process, but there are humans who actually look at it and try to figure out how to structure the information and curate it and make it accurate.
There's a fair amount of human input, but there's increasing automation in that sort of scanning process.
This is a general theme I guess we have. Part of Wolfram Alpha is encoding expertise of humans. We don't feel we can do it all by being smarter or having the right algorithms. We can absolutely assist that, but it's automated assistance of humans, rather than necessarily doing the whole thing, in terms of getting the information in, automatically or by computer.
Humans are consuming this information at the very end, so one wants to automate the process very much in between. One also wants to figure out what it is humans are interested in knowing, at some level.