
Wire Journal News

Last May, Bard and ChatGPT “responded” to a request from WJI as to why people in manufacturing should not fear AI. Now, they respond to a second request: do they agree with a legal ruling that AI cannot be listed as an inventor on a patent? Below are their responses, followed by the thoughts of WJI Patent Columnist Bill Honaker.

Bard’s perspective

As a large language model, I believe the current legal stance on AI inventorship is justified, with some nuances. Here’s why:

Lack of intent and moral rights. Unlike humans, AI lacks the independent capacity to form intent and pursue goals. My outputs are based on my training data and prompts, not on a conscious desire to solve problems or innovate. Granting me inventorship would ignore the human effort behind my creation and training, diluting the moral rights of true inventors.

Unclear attribution and accountability. AI models are complex, incorporating data from countless sources. Identifying the specific contribution of AI in an invention is challenging, raising potential attribution disputes and ethical concerns. Additionally, holding an AI accountable for potential harm caused by its creations is legally complicated.

System integrity and misuse. Recognizing AI as inventors could weaken the patent system. Malicious actors could exploit AI to generate spurious inventions, flooding the system and eroding trust. Moreover, granting legal rights to AI creates ethical ambiguities when considering potential misuse of the invention for harmful purposes.

While I support the current legal stance, the debate is evolving. As AI advances, we may need to revisit the definition of “inventor” and explore alternative models for protecting AI-generated innovations. This could involve recognizing the contributions of both programmers and AI as part of a collaborative inventive process, without attributing full inventorship to the AI itself.

Ultimately, the goal should be to foster AI innovation while safeguarding the integrity of the patent system, protecting human ownership, and ensuring responsible development and use of AI-driven inventions.

ChatGPT’s perspective

The question of whether AI, like me, should be listed as an inventor on a patent application touches upon complex and evolving legal, ethical, and technological considerations. I’ll outline why, under current understandings and legal frameworks, it is generally agreed that AI should not be listed as an inventor due to the nature of invention and inventorship.

Human-centric definition of inventorship. The concept of an “inventor” in patent law has traditionally been human-centric. It is rooted in the idea that an inventor applies a non-obvious and novel thought process to solve a problem or create something new. This human-centric view underpins most intellectual property laws, which are designed to protect human creativity and ingenuity.

AI as a tool, not a creator. AI, including myself, operates based on algorithms and data provided by humans. We process information and generate outputs based on predefined parameters and learning from vast datasets. This process, while sophisticated, is fundamentally different from human creativity. AI does not possess consciousness, intent, or the ability to conceptualize problems and solutions in the way humans do. We are tools used by human inventors to achieve certain tasks more efficiently.

Legal precedents and policies. Current legal systems in many jurisdictions, including the United States and the European Union, have upheld the principle that inventors must be natural persons. This reflects a broader legal recognition that AI, as of now, does not have legal personhood or the capacity for legal rights and responsibilities. Allowing AI to be listed as an inventor would require a significant overhaul of existing legal frameworks and an examination of the consequences of granting such a status to non-human entities.

Ethical and responsibility considerations. Assigning inventorship to AI raises ethical questions about responsibility and accountability. In the realm of patents, the inventor bears certain responsibilities and enjoys specific rights. If an AI were listed as an inventor, it would be challenging to address issues like moral responsibility for the invention or the handling of rights and profits derived from it.

Innovation and incentive structure. The patent system is designed to incentivize human innovation. Granting patents to humans for their inventive activities encourages further research and development. If AI were recognized as inventors, it might complicate this incentive structure, possibly devaluing human contribution and effort.

In conclusion, under the current legal, ethical, and technological landscape, it is appropriate that AI is not listed as an inventor on patent applications. This stance aligns with the fundamental principles of inventorship, the nature of AI as a tool rather than a creator, legal precedents, and the broader objectives of the patent system. Recognizing AI as an inventor would necessitate a reevaluation of many underlying principles of law and ethics, a task that society and legal systems are not presently equipped to undertake.

 

Editor’s note: The AI “responses” were obtained by WAI Director of Technology Chuck Szymaszek.

 

Bill Honaker’s perspective

The AI perspectives are generally correct, but both contain errors, with ChatGPT’s being the more accurate of the two. This is to be expected. Even Sam Altman, the CEO of OpenAI, the creator of ChatGPT, doesn’t trust its answers. He recently said, “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”

I found Bard’s perspective to be the least helpful. Bard’s comment that granting it inventorship would ignore the human effort behind its creation and training really misses the mark. An inventor is anyone who conceived of the invention in any claim within the patent; creating or adjusting the tools is irrelevant. Bard also raises the possibility of being held liable, as an inventor, for harm caused by its creations. To my knowledge, no inventor has ever been found liable for an invention that later caused harm. The use of the product may create liability, but not inventing it.

ChatGPT’s perspective is more accurate. Its only error was the comment that inventors apply a non-obvious and novel thought process to solve a problem or create something new. Inventors use thought to create non-obvious and novel solutions. This is important to understand: it’s the result that must be new and non-obvious, not how one thinks.

I enjoyed reading ChatGPT’s admission that AI does not possess consciousness, intent or the ability to conceptualize problems and solutions in the way humans do. That’s the problem with relying on AI output: it can’t anticipate problems and propose solutions.

I agree that AI is a tool for human inventors to get results more efficiently, and when people use it, they should be named as inventors. The USPTO suggested this when confronted with AI being named as an inventor. DABUS (short for “Device for the Autonomous Bootstrapping of Unified Sentience”) is an AI system created by Stephen Thaler, and it was named as the sole inventor on two patent applications. The U.S. Patent and Trademark Office suggested that Thaler name himself as the inventor, but he refused. As a result, the USPTO refused the applications. The same result occurred in other countries where he filed, except for South Africa, which issued the first patent naming an AI as inventor.

The AI responses also failed to discuss who owns AI inventions. I asked ChatGPT, and it was wrong. It said that the creator of the AI device would own the invention. This is what Thaler argued. But in the U.S., the inventor owns the invention unless it is assigned to another. Thaler felt he should own it because he created the inventor. If this were the case, every mother and father throughout history would own every invention, since they created their sons and daughters.

 

 

 

Sponsored by Dow

The Next Era of Connectivity
How innovative materials will enable 5G installation speed and resiliency
Dr. Paul Brigandi, Application Technology Leader, Dow

As the world’s need for high-speed data services drastically increases, so do the pressure and demand on the telecommunications industry to deliver consistent, high-quality connectivity. Experts predict that 5G connections will more than double by 2025, and relaying such massive amounts of data to devices will require millions of miles of new fiber optic cables.

How can the telecommunications industry keep up? Materials designed to last longer, ease cable installation, improve performance and provide greater reliability can help the industry usher in the future of connectivity.

In areas where connectivity is essential but space is limited, small, mini and micro fiber optic cables packed densely into conduits take up less space while still delivering high-speed services. These micro cables are installed by air-blowing them into conduits, a process that can be optimized with a low coefficient of friction. A quicker and easier air-blowing process also allows cables to be installed more efficiently in existing conduits, reducing the need for additional digging.

Of course, such immense amounts of cabling must be organized for safety and ease of troubleshooting. Laser printing is a fast, visible and efficient way to mark and permanently identify cable jackets. Other printing methods may impact the structural integrity of the micro cables and leave markings more susceptible to being removed during air-blown installation. Laser printing removes the risk of damaging cables and improves long-term print durability.

Furthermore, improved durability enables higher-quality, more reliable signal transmission, which reduces the need for repairs or replacement cables.

As cables become smaller and denser to support the world’s high demand for connectivity, telecommunications technology must also advance to eliminate any compromise on quality. AXELERON™ FO 6321 BK is one such technology, an all-in-one solution for longer fiber optic cable lifecycle protection and more reliable telecommunications infrastructure.

Designed for micro cables that are up to 60% smaller, 70% lighter and packed more densely into conduits, AXELERON™ FO 6321 BK delivers industry-leading low shrinkage rates compared to traditional jackets. With up to 25% less shrinkage, it can help reduce the stress on fiber optic cables that often leads to increased fiber attenuation. The jacketing material also enables laser printing with excellent mark contrast and highly visible marking, removing the risk of damaging cables and improving long-term print durability. AXELERON™ FO 6321 BK can help ease cable installation, improve reliability and usher the industry into the future of connectivity.

Please visit our site to learn more about the AXELERON™ FO 6321 BK Telecom Cable Compound.

About Dow

Dow (NYSE: DOW) combines global breadth; asset integration and scale; focused innovation and materials science expertise; leading business positions; and environmental, social and governance leadership to achieve profitable growth and help deliver a sustainable future. The Company's ambition is to become the most innovative, customer centric, inclusive and sustainable materials science company in the world. Dow's portfolio of plastics, industrial intermediates, coatings and silicones businesses delivers a broad range of differentiated, science-based products and solutions for its customers in high-growth market segments, such as packaging, infrastructure, mobility and consumer applications. Dow operates manufacturing sites in 31 countries and employs approximately 37,800 people. Dow delivered sales of approximately $57 billion in 2022. References to Dow or the Company mean Dow Inc. and its subsidiaries. For more information, please visit www.dow.com or follow @DowNewsroom on Twitter.

 

Nexans SA announced that it has entered into a share purchase agreement with Reka Industrial Plc to acquire Reka Cables for €53 million.

A press release said that the acquisition of the Finnish company will strengthen Nexans’ position in the Nordics, notably in electricity distribution and usages. Founded in 1961, Reka Cables has some 270 employees who manufacture low- and medium-voltage cables.

Reka Cables operates in four countries, with expected 2022 revenues exceeding €160 million. The deal, pending approvals, is expected to be concluded in the first half of 2023. In November 2021, Reka Cables became one of the first cable manufacturers to achieve carbon neutrality for Scope 1 and Scope 2 emissions.

“With a deep commitment to energy transition and carbon neutrality, Reka Cables is fully aligned with the Group’s strategic ambition to become a pure electrification player committed to contribute to carbon neutrality by 2030,” said Nexans CEO Christopher Guérin.

“As a global player in electrification and an active promoter of the energy transition, Nexans is a great fit for Reka Cables,” said Reka Cables CEO Jukka Poutanen.
