Information from AI sources needs to be heavily audited. My recent experience with Google's NotebookLM AI tool gives me reason for concern. I uploaded some documents and asked it to produce a video. The video it created was very informative but included material drawn from sources other than the ones I provided. Because those sources were not identified, I had no way to confirm the content NotebookLM added. Without that confirmation, the video cannot be trusted, and without that trust, liability concerns become paramount.
I suspect that because AI tools deliver their "product" in an easily digestible, human-like form, the output will be taken as gospel when, in fact, it shouldn't be. For now, the average person should treat AI output as advice from an informed, well-spoken friend who may nonetheless be misinformed.
If we can ever get back to a sane government, one of the first laws passed should require the government (federal, state, or municipal) to pay the defendant's legal fees, plus a little extra, whenever the defendant prevails. This would keep governments from intimidating innocent defendants into settling.