Information from AI sources needs to be heavily audited. My recent experience with Google's NotebookLM tool gives me reason for concern. I uploaded some documents and asked it to produce a video. The video it created was very informative, but it included information from sources other than the ones I provided. Without those sources being identified, there was no way for me to verify the content NotebookLM added. And without that verification, there is no way the video could be trusted. Without that trust, liability concerns would be paramount.
I suspect that because AI tools deliver their "product" in an easily digestible, human-like form, it will be taken as gospel when, in fact, it shouldn't be. At present, for the average person, output from AI should be treated like advice from an informed, well-spoken friend who may nevertheless be misinformed.