The New Year greeted me with a blog post from Dan Tunkelang, chief information scientist at LinkedIn. I’m guessing based on earlier blips across my radar that Tunkelang serves as the chief big data officer for B2B behaviorists.
It’s Tunkelang’s responsibility to place a cap and plug or two on the fire hose of information. It’s still not drinkable for the average consumer but the spray alone can irrigate quite a few promising fields (or what Tunkelang might call data products – the ability to exploit a recurring experience that can be enhanced, neutered, or packaged into some new mutation).
This is heady stuff. Owning the formula for rationalizing the collective cognitive sensation of the online clickstream on earth and what’s worth noticing is not just for disciples of the Patriot Act. Figuring out what happens between the moment we land on a page and whatever compels us to hit <send> is the cosmic mystery of our commercial age.
In the piece Tunkelang begins to unpack Abraham Maslow’s polemic on human motivation as a hierarchy of needs. Maslow’s work was not inspired by traffic patterns between servers or calls to databases but was engineered through his chosen field of psychology. Maslow concluded with an ideal – not a data product. Self-actualization was not premised on field studies or repeatable experimentation. He knew it when he saw it … in Einstein, Thoreau, Jefferson, Huxley, Jane Addams, and other high-thinking boundary crashers.
It’s interesting that Tunkelang would recast a foundation as broad as human motivation on the subjective grounds of Maslow’s work. Maslow had personality analysis and his intuitions. Tunkelang has petabytes to evidence his computer models. One perspective is based on a rich, interior life; the other is patterned off the hall of social media mirrors we hold to our surface reflections and virtual connectedness. Perhaps these differences are not conflicting and take a backseat to the core of this framework:
These people were reality-centered, which means they could differentiate what is fake and dishonest from what is real and genuine. They were problem-centered, meaning they treated life’s difficulties as problems demanding solutions, not as personal troubles to be railed at or surrendered to. And they had a different perception of means and ends. They felt that the ends don’t necessarily justify the means, that the means could be ends themselves, and that the means — the journey — was often more important than the ends.
Tunkelang sees self-actualization as a tool for framing perception. This harkens back to a time of professional distance objectified by the late 20th century mass journalism ideal of bias-free reporting. We’ve gone well past what social observers like Daniel Boorstin proclaimed in The Image, his ground-breaking pre-McLuhan polemic. Boorstin argued that most events were no longer spontaneous but orchestrated as pseudo-events and confused for public changes to the private world that concerns me, a.k.a. news.
Fifty years on we don’t question that perception is reality. We’re no longer starved for information. Our hunger is for absolutes. Our excuse for inaction forms not from a lack of information but from a lack of resolve on what to do with it, a.k.a. uncertainty. Our bias today is not red state, blue state 1-2-3. It’s that our forebears could afford more daring, as if they came from a surplus of certainty – the biggest rear-view distortion of all historic fictions.
Perhaps Tunkelang’s choice of Maslow is to guide an awkward baby giant like big data through the earnest compass of the self-actualizers. Maybe the thicket of IP addresses, browser versions, and click patterns that tangle through a congestion of transactions is what tomorrow’s information scientists can use to define reality, or at least clarify the boundaries that encircle it? We’re now finally getting to where we can assess the reality of the perception.
What Tunkelang refers to as how we interact with and benefit from data is every bit as subjective as Maslow’s basis for a centered reality:
“Indeed, data scientists like my team at LinkedIn spend most of our time converting massive volumes of data into useful information — not just for people to consume directly, but also to power other analyses and products.”
The corollary here: what users consume indirectly are the analytics that LinkedIn processes from information products composed exclusively of these same people. Of course I’m not an insider B2B guy slaving over an arsenal of social media stockpiles. I teach outsiders how to make information work for them without getting too attached to the sources or the labeling or the Darwinian edict of a digital economy that one person’s content is another party’s revenue.
But forget about the free labor that stokes the Facebook furnace. Forget the Pavlovian insistence of Google Suggest. Attention factories treat human curiosity as a natural resource – even when we gorge on an unhealthy appetite of self-selecting rationales of our own reality-making.
How does Tunkelang view the realities of big data? One unflattering view is of its bulky and yet porous nature — a mostly dormant black hole that belies any golden opportunities to exploit it for material, academic, or community gain. In 2013 we are staring blindly into an ever-cascading information surplus that operates inside a vacuum of understanding. The scarcity of our sense-making surfaces in our BS detectors, our acceptance of vocal minorities, and in the shouting matches that result. We don’t ask why. We mask our confusions through the distractions of texting and email.
Tunkelang models a world of attention managers as a community of trust-seekers. It’s not just whether a piece of evidence smells right but our own particular fragrance. After all, we are “often producers of information ourselves,” he points out: “We have an interest in establishing our own trustworthiness as sources.”
Tunkelang defines trust as the communion of authority (reliable provider) and sincerity (good faith provider). The rationale is that you’ll know my beef on Yelp is for real because I’ll get worked up in the future about the same beefy grievances. The problem is that the arm’s-length relationship of authority to evidence is in fundamental conflict with the intimacy of direct experience. Our need for self-preservation reduces our ability to represent the collective interest. A blending of the two might be an aspiration, but it belies the algorithms and trust serums that can be teased out of big data or injected into the conversations of big networks.
That elevated wisdom would bind credibility and authenticity in a state of integrity. In such a state experience informs the voice of authority. That’s an authenticity which may still bring human trust into our digital age.