Autonomous systems can be programmed to always make the logical, “best” decisions, given a set of circumstances. But what happens when human judgment and decision-making are introduced into a system? Xuan Wang, an assistant professor in George Mason University’s Electrical and Computer Engineering Department, is asking this question as part of a recent $344,000 grant from the National Science Foundation.
Wang stressed that this research is particularly important given technologies on the horizon. “The operation of many real-world systems involves the co-existence of human and autonomous agents. Inadequate coordination among these agents can lead to significant performance degradation or safety risks.”
Wang is turning the idea of humans controlling machines on its head. “The key novelty of this research is, instead of thinking about how humans can program robots, we are thinking about the ways that autonomous agents can impact humans,” he says. “Assuming human response can’t be coded in the way we can control a robotic agent’s behavior, then how can we design the robots’ behavior so that they impact human behavior in a way that is beneficial for the overall system?”
Because human agents are highly diverse and respond to what they observe in the world around them, traditional optimization approaches are less effective at predicting their behavior. Wang says he will use a framework based on game theory, which assumes each agent has its own objective function, and that this function is coupled with the decisions and actions of the other agents. Ideally, both human and autonomous agents then optimize their behavior to coordinate across the whole system, producing a better outcome.
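As a rough illustration of that coupling (a minimal sketch with invented quadratic costs, not the project’s actual models), consider one “robot” and one “human” agent whose objectives each depend on the other’s decision, with the two alternating best responses until neither wants to change:

```python
# Minimal sketch of coupled objectives, using made-up quadratic costs.
# Each agent's cost depends on both its own decision and the other agent's.

def robot_cost(r, h):
    # Robot prefers r near 1, but is penalized for deviating from the human.
    return (r - 1.0) ** 2 + 0.5 * (r - h) ** 2

def human_cost(h, r):
    # Human prefers h near 3, but is also penalized for deviating from the robot.
    return (h - 3.0) ** 2 + 0.5 * (h - r) ** 2

def best_response_robot(h):
    # Closed-form minimizer of robot_cost in r for a fixed h.
    return (2.0 + h) / 3.0

def best_response_human(r):
    # Closed-form minimizer of human_cost in h for a fixed r.
    return (6.0 + r) / 3.0

def iterate_best_responses(r=0.0, h=0.0, tol=1e-9, max_iters=1000):
    # Alternate best responses until neither agent wants to move.
    for _ in range(max_iters):
        r_new = best_response_robot(h)
        h_new = best_response_human(r_new)
        if abs(r_new - r) < tol and abs(h_new - h) < tol:
            return r_new, h_new
        r, h = r_new, h_new
    return r, h

if __name__ == "__main__":
    r_star, h_star = iterate_best_responses()
    print(f"equilibrium: robot={r_star:.3f}, human={h_star:.3f}")
    # With these particular costs the iteration settles at roughly (1.5, 2.5).
```

The fixed point of this back-and-forth is an equilibrium where neither agent can improve its own objective alone; designing the robot’s objective so that this equilibrium is good for the whole system is, loosely, the kind of question the research asks.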
This human-response alignment mechanism is bidirectional, with influence flowing in both directions at once. On the robot side, the researchers will investigate new approaches that let autonomous agents adapt more intelligently to uncertain human behaviors; on the human side, they will study how people can be incentivized during human-robot interaction so that their responses favor the efficiency and robustness of the entire system.
But how can systems, whether autonomous or controlled by humans, ever guarantee safety, say in the case of driverless vehicles?
Wang says, “When we are deriving safety criteria, there might be some uncertainties, so given the inputs of the system there will be an upper and a lower bound that let you know the worst case that can happen. Given that, if all assumptions are satisfied, one can guarantee that there will be no crash.”
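A toy version of that worst-case reasoning (with illustrative numbers and a simple stopping-distance model, not the actual safety criteria being derived) might certify a maneuver by checking the worst case over bounded uncertainties:

```python
# Toy worst-case safety check: if the inputs are only known to lie within
# bounds, certify safety against the worst case inside those bounds.
# The bounds and the stopping-distance model below are invented for illustration.

def stopping_distance(speed_mps, reaction_s, decel_mps2):
    # Distance covered during the reaction time plus the braking distance.
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def guaranteed_safe(speed_mps, gap_m,
                    reaction_bounds=(0.5, 1.5),   # assumed bounds, seconds
                    decel_bounds=(4.0, 8.0)):     # assumed bounds, m/s^2
    # Worst case: the longest reaction time and the weakest braking allowed by the bounds.
    worst = stopping_distance(speed_mps, reaction_bounds[1], decel_bounds[0])
    return worst <= gap_m

if __name__ == "__main__":
    # If even the worst case stops within the available gap, no crash can occur,
    # provided the bounding assumptions hold.
    print(guaranteed_safe(speed_mps=15.0, gap_m=60.0))  # True: worst case is about 50.6 m
    print(guaranteed_safe(speed_mps=25.0, gap_m=60.0))  # False: worst case is about 115.6 m
```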
Wang and his team are also working with the Army Research Lab to develop collaborative autonomous vehicles that operate in unknown environments, ensuring the vehicles can coordinate and gain an advantage when potential threats are present.