Research on the design of intelligent user interfaces for AI and human interaction is an established topic. Researchers and practitioners meet annually to discuss the latest advances in Human-Computer Interaction (HCI) and Artificial Intelligence (AI) at the Conference on Intelligent User Interfaces (ACM IUI). Human-AI interaction has become a major focus for companies and researchers.
The ACM IUI conference grew out of a 1988 workshop titled "Architectures for Intelligent Interfaces". The workshop organizers published the 500-page book Intelligent User Interfaces in 1991, and ACM's First International Workshop on Intelligent User Interfaces met in 1993. The ACM IUI conference will hold its 30th annual meeting in Italy in 2025, and the call for papers has an interesting list of topics.
AI agents are AI systems that can perceive their environment and autonomously act in response to and on objects in that environment. The conference call for papers mentions the term "agent" only once in its long list of topics. The discussion of agents is folded into "Intelligent Applications," which indicates that AI systems are still primarily treated as tools to be controlled by users.
From Passive Tools to Human-like Agents
The CEO of NVIDIA, Jensen Huang, described the current paradigm shift as driven by the introduction of autonomous AI systems as "agents":
"Computers, software: they're an industry of tools. For the very first time, this is going to be an industry of skills, and you capture that phrase, and you call it agents. But it's going to be, for the very first time, an agent sitting on top of tools [as] agents using tools. The opportunity for agents is gigantic. …
It sounds insane, but here's the amazing thing. We're going to have agents that obviously understand the subtleties of the things that we ask it to do, but it can also use tools and it can reason, and it can reason with each other and collaborate with each other."
In the past, a mathematical algorithm was identified as an AI system if it learned to categorize objects by observing randomly generated category members. Now, people think of AI systems as intelligent assistants, and they are evolving into proactive agents that predict user requests and act as collaborators.
Yu Huang defined and explained five levels of AI agents, summarized below. The first three levels differ in the baseline AI, and then each higher level adds two or more human attributes.
The first set of attributes, perception and action, corresponds to the interpreted sensory-input function and the motor-output function. According to Huang, the more advanced models gain reasoning and decision-making, then memory and reflection, and then generalization and autonomous learning.
The highest level includes personality and collaborative behavior among multiple agents. Personality includes character patterns with cognitive and emotional attributes.
It is interesting to identify the parallels between Huang's sequence and my summary of Dreyfus and Dreyfus' sequence for human learning, shown below.
STAGE NAME: FUNDAMENTAL FOCUS OF COGNITIVE PROCESSING
- Novice: Objects (each object has attributes, such as shape and permitted interactions with it). This uses perception and action to act on Gibson affordances, which relate an agent's capabilities to the features available for action in the environment.
- Advanced Beginner: Situation (similarity of current situation to one or more previous situations). This introduces the use of reasoning and decision-making because they are necessary to understand and act in situations.
- Competence: Patterns (abstracting and recognizing similarities across situations). This introduces memory of situations and the capability of reflection to abstract similarities among them.
- Proficiency: Goal (reevaluation of similarities based on a particular criterion for success). This introduces generalization and autonomous learning.
- Expert: Roles (the playing style of an opponent is included as a factor when reevaluating the weightings). This introduces a knowledge of the "other's" character patterns with cognitive and emotional attributes.
- Mastery: Purpose (allows conscious control to override and/or modify any learned response). This level is solely a human capability that allows humans to "stand outside themselves" because of the difference between self-awareness and consciousness.
As AI systems shift from useful tools to collaborative agents, the way we interact with them will change dramatically. Agentic AI is evolving rapidly, and AI agents, such as future releases of Pieces, will be able to overhaul typical software development workflows to make work easier and developers even more productive.
According to Jensen Huang, "Working with AI will soon be just like onboarding new employees." He has explained that the computer industry will be reinvented in all aspects:
"You can't solve this new way of doing computing by just designing a chip. Every aspect of the computer has fundamentally changed, and so everything from the networking to the switching to the way that computers are designed to the chips themselves, all of the software that sits on top of it and the methodology that pulls it all together, it's a big deal because it's a complete reinvention of the computer industry."
Human-AI Collaboration Is Coming Soon
Within a short time of beginning to interact with a natural-language processing (NLP) computer, people attribute human-like feelings to it. This first happened in 1966, when MIT researcher Joseph Weizenbaum deployed an NLP system called ELIZA. There was no database storage or reasoning, only an "active listening" process that echoed the user's input back as a question.
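To show how little machinery that first impression required, here is a minimal Python sketch of an ELIZA-style echo. The pattern and word list are illustrative stand-ins, not Weizenbaum's original script:

```python
import re

# Minimal sketch of ELIZA-style "active listening": no memory, no reasoning,
# just reflecting the user's own words back as a question.
# The pattern and word list below are illustrative, not the original script.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

def reflect(text: str) -> str:
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_echo(prompt: str) -> str:
    match = re.match(r"i (?:feel|am) (.*)", prompt, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Can you tell me more about why you say '{reflect(prompt)}'?"

print(eliza_echo("I am worried about my project"))
# -> Why do you feel worried about your project?
```

Even this trivial echo loop was enough for many users to attribute understanding and empathy to the program.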
Gradual progress was made over the years, with IBM building the first (small) statistical language models in the 1980s. ChatGPT, the chat assistant that brought large language models (LLMs) to a mass audience, was released near the end of 2022. The time between releases of more advanced, larger models has now decreased to weeks, sometimes even days. There has been a definite shift in AI-human interaction.
For example, Pieces' unique workflow design allows users to interact with the more advanced AI models in ways no other tool provides. A recent blog post provides 20 novel AI prompts (with demos) covering many different types of questions. The examples can be used as templates for four types of prompts:
- Researching and Problem Solving in the Browser
- Coding in the IDE
- Collaborating with Colleagues
- Context Switching Across Various Tools and Improving Productivity
The fourth category focuses on a new capability that is intensely valuable now and will become even more valuable in the future. Pieces maintains a "Live Context" record that supports the user's workstream as work is done. The example questions include:
- What from yesterday's research can help me here?
- What are the steps to implement this tool based on their docs?
- When's the best time to set up a sync with my team based on our shared calendars?
- What are the main roadblocks for this release based on open GitHub issues?
The AI context memory that supports these types of questions is the local (on-device) central repository that syncs across the three main pillars of a developer's workflow: (1) researching and problem-solving in the browser, (2) coding in the IDE, and (3) collaborating with colleagues on communication platforms. It also supports broader use cases for reducing context switching and improving overall productivity.
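As a rough mental model of such an on-device context store, here is a purely illustrative Python sketch. It is not Pieces' implementation or API, just a toy version of the idea: events from the browser, IDE, and communication tools land in one local repository that can be queried for relevant context later.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative toy model of an on-device context store; NOT the Pieces
# implementation or API. Events from the browser, IDE, and chat tools are
# recorded locally and can be searched for relevant context later.
@dataclass
class ContextEvent:
    source: str        # "browser", "ide", or "chat"
    content: str
    timestamp: datetime = field(default_factory=datetime.now)

class LocalContextStore:
    def __init__(self) -> None:
        self.events: list[ContextEvent] = []   # stays on-device

    def record(self, source: str, content: str) -> None:
        self.events.append(ContextEvent(source, content))

    def search(self, query: str) -> list[ContextEvent]:
        # Naive keyword match stands in for real semantic retrieval.
        terms = query.lower().split()
        return [e for e in self.events
                if any(t in e.content.lower() for t in terms)]

store = LocalContextStore()
store.record("browser", "Read the OAuth 2.0 docs for the new login flow")
store.record("ide", "Implemented token refresh in auth_client.py")
for event in store.search("OAuth login"):
    print(event.source, "->", event.content)
```

The point of the sketch is the architecture, not the retrieval method: one local repository receives events from every pillar of the workflow, so questions like those above can be answered without sending the work history to a cloud service.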
Pieces' Workstream Pattern Engine autonomously stores information from the user's workflow. Because that information stays on the device, the engine is designed to avoid retaining private data or opening new security vulnerabilities. It is there so that the user can "remember anything, interact with everything."
Soon, Pieces will release a proactive agent that provides a "feed" technology. It gives developers helpful resources based on things they interact with. The engine knows what you are working on, what you have worked on in the past, and what you might work on in the near future.
For example, when a user clicks on a new repository, Pieces will be able to instantly supply useful code snippets and links based on the code in that repository. When this is coupled with an autonomous AI agent, there could be a collaborative increase in productivity. In the future, Pieces will also solve problems straight from your command line.
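To make the idea concrete, here is a hypothetical sketch of how a proactive feed might rank previously saved material against a newly opened repository. The scoring is a naive keyword overlap and does not reflect Pieces' actual engine:

```python
# Hypothetical sketch of a proactive "feed": rank saved snippets and links by
# naive keyword overlap with the repository the user just opened. This is an
# illustration of the concept, not the Pieces Workstream Pattern Engine.
def keywords(text: str) -> set[str]:
    return {w for w in text.lower().split() if len(w) > 3}

def rank_feed(repo_readme: str, saved_items: list[dict]) -> list[dict]:
    repo_terms = keywords(repo_readme)
    scored = []
    for item in saved_items:
        overlap = len(repo_terms & keywords(item["text"]))
        if overlap:
            scored.append({**item, "score": overlap})
    return sorted(scored, key=lambda item: item["score"], reverse=True)

saved = [
    {"title": "JWT refresh snippet", "text": "token refresh logic for OAuth authentication"},
    {"title": "CSS grid notes", "text": "layout tricks for responsive grids"},
]
readme = "A service handling OAuth token authentication and refresh flows"
for item in rank_feed(readme, saved):
    print(item["title"], item["score"])
```

A real feed would use semantic similarity and workflow history rather than keyword counts, but the shape of the interaction is the same: the user opens something new, and relevant prior work surfaces without being asked for.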
On the day I originally submitted this post, OpenAI released canvas, a beta feature for ChatGPT built on GPT-4o. It splits the screen into two areas: the chat conversation and a separate canvas for working on text or code. For example, prompting for code will display the code in the canvas area, separate from the chat conversation.
The focus of the release is captured by its section titled "Training the model to become a collaborator," which states:
"We trained GPT-4o to collaborate as a creative partner. The model knows when to open a canvas, make targeted edits, and fully rewrite. It also understands broader context to provide precise feedback and suggestions."
This is a newly trained version of GPT-4o rather than the old model inside a wrapper.
Conclusion
Pieces' Live Context is active now. In the future, Pieces will take natural input from the user's normal workflow activities and act on their behalf. It will solve tasks for you based on what it learns from your current work.
If you want to build on-device AI agents now, without depending on cloud-based providers, explore the Pieces Python CLI open-source project. It aims to be the central AI operating system for a world of AI agents.
If human-AI interaction interests you, here are some suggested sources for the latest news and updates.
- Science Daily Artificial Intelligence News is a good source of all types of information about AI.
- The Human-Artificial Intelligence (AI) Interaction: Latest Advances and Prospects 2024 special issue. Manuscripts are due 20 December 2024, and the annual issues from previous years are available.
- The Economic Times of India provides a more global view.
User-interaction researchers have created and studied guidelines for human-AI interaction. The researchers who wrote "A Comparative Analysis of Industry Human-AI Interaction Guidelines" examined the sets of guidelines issued by Google, Microsoft, Apple, and others. They have also created an open-source website for AI interaction guidelines.
Human-robot interaction AI applications currently exist, and robots can act as autonomous agents. There is a web portal for that information, and ACM Transactions on Human-Robot Interaction is a peer-reviewed interdisciplinary journal on human-robot interaction.