By Andreas Persidis, CEO, Biovista Inc.
Are you, by any chance, like me, beginning to get the feeling that when you search online, your digital profile may be getting in the way? That too much of this “personalized” cerebral pampering, too much of this “right information, to the right person, at the right time” thing, is not always what you need?
I first got this feeling while browsing YouTube, when, after I had viewed a number of videos on a subject I was interested in, the same related videos kept being proposed, over and over. It felt like “seen that, done that, boring…”. Same with Amazon book recommendations: they were getting “too close” to my original search, and that was not exactly what I was hoping for. The feeling was that I was being confined in an information enclosure and that I would need a concerted cognitive effort to get out of it (say, by starting a totally unrelated search), which was not exactly what I wanted.
Different profiles in different contexts
Now I’m not saying that this “viewers who bought/looked-at this also bought/looked-at that…” feature is such a bad thing. But I think there are situations where this, let me call it “extreme profiling”, may negatively affect the quality of service you get. One such situation that springs to mind is scientific discovery in the life sciences. When I am in “discovery mode”, yes, I have a rough idea of the research area I am exploring and yes, I do not want to be served duplicate or obviously unrelated data. But I also want to find possibly related things that I am not aware of and that could lead to my “aha” moment.
Cognitive institutionalisation
So who gets to judge what is “possibly related”? Currently, and increasingly, it is algorithms that profile me based on one or more of (a) my own “digital footprint”, (b) the degree of overlap between my footprint and the footprints of “others-like-me”, and (c) the footprints of those others-like-me. Again, this can give some good results, especially for an advertiser looking to reach potential customers. But when it is overdone, when the fit becomes too tight, it begins to feel like “inbreeding”, like a “cognitive institutionalisation” that may hinder rather than support discovery.
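To make the “others-like-me” idea a bit more concrete, here is a minimal sketch in Python of how such overlap-based profiling could work in principle. It is a toy, not how YouTube, Amazon or Google actually build their profiles; the user names, item names and the simple overlap-based recommender are all illustrative assumptions.

```python
# A toy illustration of "others-like-me" profiling, not how any real platform
# actually works: each user's digital footprint is a set of items they have
# looked at, and recommendations come from the users whose footprints overlap
# mine the most. All names and footprints below are made up.
from collections import Counter


def jaccard(a: set, b: set) -> float:
    """Overlap between two footprints: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0


def recommend(me: set, others: dict[str, set], k: int = 2) -> list[str]:
    """Suggest items from the k users whose footprints overlap mine the most."""
    neighbours = sorted(others, key=lambda u: jaccard(me, others[u]), reverse=True)[:k]
    votes = Counter(item for u in neighbours for item in others[u] - me)
    return [item for item, _ in votes.most_common()]


footprints = {
    "user_a": {"video1", "video2", "video3"},
    "user_b": {"video2", "video3", "video4"},
    "user_c": {"video7", "video8"},
}
# Everything suggested stays close to what I have already seen:
print(recommend({"video1", "video2"}, footprints))  # -> ['video3', 'video4']
```

Notice that the suggestions can only ever come from footprints that already overlap mine; that is the information enclosure in miniature.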
The cleverer Google gets at knowing who I am and what (it thinks) I want, the harder it will be for me to use my search results to “think out of the box”, to apply “lateral thinking” and to access these possibly related things that could lead to a new discovery. So what is the best way to support discovery in our world of big data?
One “old school” method is the traditional Boolean keyword-based search, which returns most everything under the sun and then invites the user to “have fun sorting through it all”. The other is the way our modern search tools (Amazon, Google, YouTube, etc.) do it; but as I said, I think that approach has issues too, especially in a scientific discovery setting.
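For contrast, here is an equally toy sketch of that “old school” Boolean approach; the documents and query terms are made up, and the point is simply that everything matching the query comes back, unranked.

```python
# A toy sketch of the "old school" Boolean keyword search: every document that
# matches the query is returned, unranked, and the sorting is left to the user.
# The documents and terms are made up for illustration.
docs = {
    1: "aspirin reduces inflammation in cardiovascular disease",
    2: "statins and cardiovascular risk in diabetic patients",
    3: "aspirin use and colorectal cancer prevention",
}


def boolean_search(must_have: list[str], must_not: tuple[str, ...] = ()) -> list[int]:
    """Return ids of documents containing all `must_have` terms and none of `must_not`."""
    hits = []
    for doc_id, text in docs.items():
        words = set(text.split())
        if all(t in words for t in must_have) and not any(t in words for t in must_not):
            hits.append(doc_id)
    return hits


print(boolean_search(["aspirin"], ("cancer",)))  # -> [1]; now "have fun sorting through it all"
```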
What filters, who applies them and when?
So it seems to me that some useful questions we need to ask are the following:
- What kind of filters should we be applying in order to present search results to scientists?
- Who should decide which filters to apply? The algorithm, or the end user, who could pick clever, well-described filters from a menu?
- What is the best time to apply these filters? At the beginning, before the search results are presented, or in a step-wise manner, in response to user input? How can we capture and augment the scientific browsing process in a way that helps scientists zoom in on potential discoveries? (A rough sketch of such a step-wise approach follows this list.)
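To make that last question a little more tangible, here is a rough sketch, again in Python and purely illustrative, of what a step-wise, user-driven filtering session might look like; the filter menu, filter names and record fields are entirely hypothetical.

```python
# A rough sketch of step-wise, user-driven filtering: the scientist, not the
# algorithm, picks named, well-described filters from a menu and applies them
# one at a time, inspecting the result set between steps. The filter names and
# record fields are entirely hypothetical.
from typing import Callable

Record = dict  # e.g. {"title": ..., "year": ..., "species": ..., "mechanism": ...}

FILTER_MENU: dict[str, Callable[[Record], bool]] = {
    "human studies only": lambda r: r.get("species") == "human",
    "published after 2015": lambda r: r.get("year", 0) > 2015,
    "known mechanism of action": lambda r: bool(r.get("mechanism")),
}


def step_wise_filter(results: list[Record], chosen: list[str]) -> list[Record]:
    """Apply the user's chosen filters one at a time, reporting the remaining count."""
    for name in chosen:
        results = [r for r in results if FILTER_MENU[name](r)]
        print(f"after '{name}': {len(results)} results left")  # the user can stop or backtrack here
    return results


candidates = [
    {"title": "Drug X repurposing study", "year": 2018, "species": "human", "mechanism": "kinase inhibition"},
    {"title": "Drug Y in mouse models", "year": 2012, "species": "mouse", "mechanism": None},
]
step_wise_filter(candidates, ["human studies only", "published after 2015"])
```

The detail that matters is not the code but where the control sits: the filters are named and described, and it is the scientist who decides which to apply, in what order, and when to stop.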
As more information becomes available, and as the need to understand the interconnections between chunks of scientific knowledge becomes more pressing, we will need to keep thinking about how to design resources that support discovery in a way that makes the best use of both the machine and the human. It’s a question we were asking in the early days of AI, and I think it is now resurfacing, more pertinent than ever.