
Can AI be too good to use?

Date:
December 12, 2023
Source:
University of California - Davis
Summary:
Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is 'trustworthy': Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new article poses a different question: What if an AI is just too good?
FULL STORY

Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is "trustworthy": Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper published Dec. 7 in Frontiers in Artificial Intelligence poses a different question: What if an AI is just too good?

Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, on the attitudes of the food industry toward adopting AI. A notable issue was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.

For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.

"The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted," Alexander said.

An on-ramp for AI

Alexander and her co-authors, Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary "on-ramp" that would allow companies to begin using AI while exploring the benefits, risks and ways to mitigate them. This would also give courts, legislators and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political and regulatory decisions.

"We need ways for businesses to opt in and try out AI technology," Alexander said. Subsidies, for example for digitizing existing records, might be helpful especially for small companies.

"We're really hoping to generate more research and discussion on what could be a significant issue," Alexander said. "It's going to take all of us to figure it out."

The work was supported in part by a grant from the USDA National Institute of Food and Agriculture. The AI Institute for Next Generation Food Systems is funded by a grant from USDA-NIFA and is one of 25 AI institutes established by the National Science Foundation in partnership with other agencies.


Story Source:

Materials provided by University of California - Davis. Original written by Andy Fell. Note: Content may be edited for style and length.


Journal Reference:

  1. Carrie S. Alexander, Aaron Smith, Renata Ivanek. Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system. Frontiers in Artificial Intelligence, 2023; 6. DOI: 10.3389/frai.2023.1298604

