Information from AI sources needs to be heavily audited. My recent experience with Google's NotebookLM AI tool gives me reason for concern. I uploaded some documents and asked it to produce a video. The video it created was very informative, but it included information from sources other than the ones I provided. Without those sources being identified, there was no way I could confirm the content NotebookLM added. And without that confirmation, there is no way the video could be trusted. Without that trust, liability concerns would be paramount.
I suspect that because AI tools deliver their "product" in an easily digestible, human-like form, it will be taken as gospel when, in fact, it shouldn't be. At present, for the average person, output from AI should be treated as advice from an informed, well-spoken friend who may be misinformed.
It seems odd that there were so many unauthorized South Korean workers (300+ of the ~475 arrested) on this job site. Is this how foreign businesses that commit to building facilities in the US operate: import their own citizens to work, illegally, on US projects?