7 MIN READ

A.I. in Market Research: What it Means to be Human

Chris Martin


    AI (or artificial intelligence) is a staple of dystopian science fiction, so much so that it’s almost become a cliché. Even the mention of AI conjures images of oppressive robot overlords single-mindedly seeking to eradicate the human race. Warnings of an apocalypse at the hands of our own artificial creations were echoed by Stephen Hawking and Elon Musk in 2014. Yet the research industry continues to march steadily towards an automated future led by machine learning algorithms and intelligent models. Surely we’re creating the very future we’ve been warned about?

    Well, no. The future of market research is one where researchers and AI work in tandem to create insights that were previously unobtainable. Understanding why the role of artificial intelligence in research is collaborative rather than hierarchical means knowing what it means to be human, as well as what it doesn’t.

    Introducing Robot Artists

    It’s long been accepted that machines are more capable of processing data than humans. Provide an input, dictate instructions and an outcome is produced. This simple model is what almost all current computational models are built upon. It is the foundation of automation in market research. But to move from simple automation to real intelligence, robot researchers must do more than just follow instructions – they must make decisions.


    So, is a robot capable of making decisions? That’s a deeply philosophical question to unpack. One potential answer is provided by AI artists. These are bots created to design works of art – each with a unique approach. Some crowdsource brushstrokes from online applications, while others use paint to imitate photos and real-life surroundings. Importantly, regardless of the decision-making process, the finished pieces are indistinguishable from those created by human artists. The value of the artwork is determined by the audience, not the creator.

    Of his robots, software developer Pindar Van Arman says: “Was it the robot or me that made the painting? It made every aesthetic decision that I typically make when I am commissioned for a portrait.” It seems, therefore, that the answer would be: yes, robots can replicate a human decision-making process. But can the same be applied to market research?

    Replicating vs. Understanding

    Replicating a human decision-making process is one feat, but understanding (and, more importantly, empathising with) it is another. This is the key to successful market research – looking beyond the words and numbers and getting to grips with the emotions, attitudes and values that underpin behaviour. To make the transition from artist to researcher, it is this valley that artificial intelligences must cross.

    But even looking towards the most advanced machine learning models (such as Google’s seq2seq experiments), there is only a limited degree of understanding. This is because intelligence in robotics refers to the ability to learn and adapt independently of human input. It is not equal to free thought – an important distinction to make. Machine learning models are, at their core, learning through trial and error. Digitised psychological conditioning.

    The notion of classical conditioning as a psychological learning method can be traced back to the early 1900s, when it was popularised by Ivan Pavlov’s now-famous experiments. Repeated in many forms, these experiments highlighted that a physical (non-conscious) response could be generated by creating mental associations with stimuli through repetition and feedback. For example, a dog would learn through association that those wearing lab coats brought food (the stimulus) – so over time it began to drool upon seeing lab coats, rather than the food itself.

    Operant conditioning, coined by B.F. Skinner almost three decades later, built on this model to describe how positive and negative reinforcement (the latter being the removal of aversive stimuli) strengthened associations and acted as a more direct behaviour modification mechanism. In many ways, machine learning algorithms (the current frontrunner for AI models) can be likened to this model of operant conditioning. The algorithm learns based on feedback – either automated or manual – which response is correct and which is wrong.
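    By way of illustration only, this feedback loop can be sketched as a tiny reinforcement-learning program: an epsilon-greedy "bandit" learner that discovers, purely from reward feedback, which of several candidate responses pays off. The reward probabilities below are invented for the example, not drawn from any real system.

    ```python
    import random

    def train_bandit(reward_probs, steps=10000, epsilon=0.1, seed=0):
        """Learn which 'response' earns the most reward, purely from
        trial-and-error feedback (the operant-conditioning loop)."""
        rng = random.Random(seed)
        estimates = [0.0] * len(reward_probs)  # learned value of each response
        counts = [0] * len(reward_probs)
        for _ in range(steps):
            # Explore occasionally; otherwise exploit the best-known response.
            if rng.random() < epsilon:
                choice = rng.randrange(len(reward_probs))
            else:
                choice = max(range(len(reward_probs)), key=lambda i: estimates[i])
            # Feedback: a reward reinforces the association, as in Skinner's model.
            reward = 1.0 if rng.random() < reward_probs[choice] else 0.0
            counts[choice] += 1
            estimates[choice] += (reward - estimates[choice]) / counts[choice]
        return estimates

    # Three candidate responses with hidden success rates; the learner
    # identifies the best one (index 2) from feedback alone.
    learned = train_bandit([0.2, 0.5, 0.8])
    ```

    No one tells the learner which response is "correct"; repeated reward feedback alone shapes its behaviour, which is the parallel the article draws.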

    A.I. Today and Tomorrow

    Even if the AI of today can learn through operant conditioning, their application in market research is still somewhat limited. Their role will still be one of automation – replacing simple, repetitive tasks. Analysing quantitative results, sending personalised prompts to survey non-completes and even choosing appropriate research methods are all tasks that could feasibly be automated. They can all be learnt through trial and error (given enough time and computing power).
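    A minimal sketch of one such task, prompting survey non-completes, might look like the following. The `Respondent` record and the message wording are hypothetical, invented purely to show how mechanical the task is:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Respondent:
        name: str
        email: str
        progress: float  # fraction of the survey completed, 0.0 to 1.0

    def reminder_prompts(respondents, threshold=1.0):
        """Flag anyone who hasn't finished the survey and draft a
        personalised reminder: a repetitive task well suited to automation."""
        prompts = []
        for r in respondents:
            if r.progress < threshold:
                prompts.append((
                    r.email,
                    f"Hi {r.name}, you're {r.progress:.0%} of the way "
                    f"through our survey. Could you spare a minute to finish?",
                ))
        return prompts

    panel = [
        Respondent("Ada", "ada@example.com", 0.4),
        Respondent("Ben", "ben@example.com", 1.0),
    ]
    messages = reminder_prompts(panel)  # only the non-complete is prompted
    ```

    The point is not the code itself but what is absent from it: nothing here requires empathy or judgement, which is exactly why such tasks are the natural first candidates for automation.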

    But today’s artificial intelligences aren’t capable of becoming a researcher, or even a moderator. These are tasks that require empathy over understanding and emotional intelligence over awareness. A successful moderator must do more than recognise the emotion behind words – they must understand why the subject is emotive and what deep-seated (potentially unconscious) values are driving it.

    It is these complex tasks where A.I. must develop beyond automation to address the human aspect of market research. The seeds of more natural, human-like intelligence have been planted by attempts to replicate neural networks and organic learning. Could this lead to a day where machines understand what it means to be human better than we ourselves do? If we were to entertain the macabre, dystopian future of science fiction – then yes, absolutely.


    But in reality, no. Even if science could create the perfect, organically learning artificial intelligence, it would be able to do no more than modern researchers – form an opinion. Why? Because the reason behind emotions can only be hypothesised, not proven. Whether we choose to accept or reject the judgement of an A.I. researcher is still in our hands.

    So instead of focussing on creating artificial intelligences that attempt to replicate human judgement, the research industry would be better served by those which complement it. Automation is the perfect starting place. Removing the need for repetitive, time-consuming tasks frees up researchers to concentrate on what they do best – think, reflect and feel.

    Where artificial research intelligence goes next is up to the industry as a whole. But one thing is for certain: its role will be to complement and enhance (not replace and destroy). We’re safe from the robot overlords for now.
