New Practice Advisory on AI
As smart algorithms, generative AI (GAI), and Large Language Models (LLMs) become more prevalent in the market, engineering and geoscience professionals remain professionally responsible for their work even when it is generated by or includes AI output.
The risks associated with AI-based systems and tools differ from traditional software risks. These include, but are not limited to:
- Biases: Flaws in datasets or algorithms (e.g., non-representative training samples), and in human cognition (trusting AI-based systems over the user's own judgment and knowledge), can result in inaccurate outputs.
- Trustworthiness: An AI system’s trustworthiness is specific to the context of its use (i.e., an AI-based system or tool may be considered trustworthy in one application, but not in another).
- Lack of repeatability: Users may be unable to replicate or reproduce results from AI-based systems.
- Confidentiality and privacy: Confidential information entered into AI-based software may become owned by, or accessible to, a third party.
- AI hallucinations: An output of a generative AI model can appear plausible but be factually incorrect, inconsistent with the inputs, or otherwise nonsensical.
- Human dependence on AI: Overreliance on AI can risk diminishing creativity, critical thinking, and human intuition.
- Intellectual property: Work product generated by AI may use or reproduce content without the appropriate permissions or rights to do so.
Engineering and geoscience professionals who use AI in any capacity must consider their ability to meet the quality management requirements for documented checking, direct supervision, document retention, and independent review, as outlined in the Engineers and Geoscientists BC Bylaws. Registrants can review Use of Artificial Intelligence (AI) in Professional Practice online.