Earlier this month, several FMG researchers attended the User Experience DC (UXDC) Conference hosted by the DC Chapter of the User Experience Professionals Association (UXPA).
One of the presentations I attended was John Whalen’s talk, "The Future of UX Is Here: AI and Cognitive Design." Dr. Whalen discussed artificial intelligence (AI), machine learning, and augmented cognition, arguing that as these technologies emerge, user experience research needs to expand beyond traditional usability testing and design thinking to ensure user-centered design. One takeaway from the session was that command styles vary widely when users interact with devices like Google Assistant and Alexa. This finding was illustrated well when audience volunteers came to the stage and gave commands to different AI devices: each person phrased the command differently and spoke at a different pace.
This idea aligns with traditional usability testing results: people apply different strategies when completing tasks on a digital platform, comprehending information, and solving problems. Some people enter "best easy lasagna recipe" into Google, whereas others search for "what is the most highly rated easy 30 minute recipe for meat lasagna?" It makes sense that the way humans interact with these devices would vary widely based on personality, cultural influence, time demands, and many other factors.
An interesting takeaway from the session was the difference in people’s reasoning for their AI device preference. One group of people from Dr. Whalen’s study preferred the most technologically accurate device whereas another group preferred the device with more personality and humanistic qualities.
For participants who enjoyed the device with more humanistic qualities, accuracy of response was typically not as important. In contrast, some participants wanted direct answers with no witty or humorous banter. This affinity for devices with humanistic qualities adds another layer to user experience research and design. Sure, we work to make things as user-friendly, welcoming, and intuitive as possible. We try to use headlines, language, and imagery in our designs that grab a user’s attention in a friendly, welcoming way. But even as websites and applications become more personalized and conversational, most don’t encompass the same level of personality and banter that we are seeing from Siri and similar devices.
Website personalization has been on the rise for years. On many websites, visitors are shown personalized pages based on their anticipated needs. Big data fuels this personalization and enables companies to segment their audience in real time. Although most user interfaces have been shifting toward more personalized and human-like interaction, this approach will not meet everyone’s needs. Past work has shown that, depending on their goals, users may want to be directed straight to the information they are looking for, and a witty or conversational interaction may not be appropriate. Some users may simply not want a human-like interaction on a website. Incorporating "humanness" into future designs while also accommodating users who don’t want humanistic qualities will pose an interesting design challenge. Will future devices, websites, and applications need a personality on/off switch? Perhaps in the future, audience attitudes toward humanistic qualities will be a critical preference, determining the pages and content that users are shown.
What do you think? Feel free to reach out and let us know your thoughts on this subject. If you were at UXDC, say hi on Twitter, Facebook, or LinkedIn—we would love to hear about your experience!