AI: The Next Challenge for Plastic Piping Engineers and Designers

By PPN Editor – October 7, 2024
The next major focus for plastic piping engineers and designers will be the increased use of artificial intelligence (AI) tools in the pipeline construction sector.
From writing project specifications to generating detailed design reports, generative AI is being increasingly used by pipeline engineering consultants.
Corporate lawyers will need to navigate the complexities these technologies introduce. This includes addressing issues related to data ownership, liability for software errors leading to incorrect design specifications, and the implications of automated decision-making on traditional roles in the design and construction process.
Information modelling and artificial intelligence are increasingly being used in designing and planning large construction projects, but this gives rise to new and different risks for engineering consultants to consider.
Ensuring clarity and accuracy in design reports will be crucial as these innovative AI tools and practices become more commonplace.
Adapting to this new technology will shape how pipeline construction projects are executed and will also redefine the legal and contract law landscape.
While AI technology may be the next disrupter in the pipeline construction industry, who carries the liability for incorrect design calculations and rogue AI responses?
Fake Research Reports
AI-generated fake technical research papers are currently flooding the internet, posing a threat to academic search engines and to unsuspecting engineers who cite and rely on the publications.
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI.
They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing.
Some “published” papers contain content copied directly from chatbot output, including phrases such as "I don't have access to real-time data" and "as of my last knowledge update" — wording characteristic of OpenAI’s chatbots.
Most of these GPT-fabricated papers were found in non-indexed journals and working papers, but some cases included research published in mainstream scientific journals and conference proceedings.
This can lead to major credibility concerns and tarnish the image of design consultants if the spread of fake content remains untackled and incorporated into new designs.
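The telltale phrases described above lend themselves to a simple screening step before a source is cited. Below is a minimal sketch in Python, assuming a hypothetical phrase list and function name chosen for illustration; a real workflow would use a far larger phrase set and would still require human review, since the absence of such phrases proves nothing.

```python
# Minimal sketch: flag documents containing boilerplate phrases
# commonly left behind in unedited chatbot output.
# The phrase list below is illustrative, not exhaustive.

TELLTALE_PHRASES = [
    "i don't have access to real-time data",
    "as of my last knowledge update",
    "as an ai language model",
]

def find_chatbot_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = "Results are promising. As of my last knowledge update, no field data exist."
print(find_chatbot_phrases(sample))  # → ['as of my last knowledge update']
```

A hit does not prove fabrication, and a clean result does not prove authenticity; this kind of check only triages which references deserve closer scrutiny.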
Beware the Limitations of AI
Before using any artificial “intelligence” (AI), the user should understand that AI is not intelligent; it is better regarded as a next-generation search engine that can rapidly search, compare, collate, compile and synthesise multiple pieces of information into a document.
AI should never be used to generate technical specifications, test methods, test reports, technical articles, or research papers without the author having a thorough understanding of the subject, and an in-depth technical knowledge.
The author must validate and cross check in detail every piece of information in the resulting document. Prior to the advent of AI, this was already a problem with the “cut and paste” generation of specifications with incorrect material values, test methods and reporting requirements as examples. AI has now exacerbated this risk.
It is conceivable that an AI-generated specification may result in an inappropriate material being used in a critical piping application that subsequently fails catastrophically. In these types of applications, catastrophic failure can result in potential loss of life, significant environmental damage, significant financial costs and major reputational damage.
When writing technical specifications, authors should always consult directly with manufacturers, testing authorities and technical peers to ensure the document is realistic, appropriate, fit for purpose and achieves the outcomes required by the asset owner.
Many engineering consulting companies, such as GHD, AECOM and WSP, are placing bans or restrictions on generative AI tools like ChatGPT due to cybersecurity concerns, including potential leaks of confidential information, and concerns about data privacy.
Other companies, however, are taking a more nuanced approach than outright bans: implementing guidelines for safe use and creating internal versions of AI tools that do not share data externally. Completely banning these tools could put a company at a competitive disadvantage as AI becomes more integral to the industry.