
US gov’t task force publishes guidelines for AI innovation framework

The National Artificial Intelligence Research Resource (NAIRR) task force has published new guidelines to strengthen and democratize AI innovation in the U.S.

The task force was formed in 2020 to determine whether a national artificial intelligence research resource should be created and, if so, to identify the best ways to build it. Its members recommend that such a resource be created.

Officials on the task force say the NAIRR should make AI research resources accessible to more researchers across the U.S. It should also identify ways to overcome the technical barriers that make it impractical to move AI’s massive datasets and to share advanced computing power beyond today’s technology hubs and corridors.

“This access divide limits the ability to leverage AI to tackle the big challenges in our society,” reads the latest NAIRR report, adding that it also slows progress toward trustworthy AI.

The divide “constrains the diversity of researchers in the field and the breadth of ideas incorporated into AI innovations, contributing to embedded biases and other systemic inequalities found in AI systems today.”

The framework itself should be “broadly accessible to a range of users and provide a platform that can be used for educational and community-building activities in order to lower the barriers to participation in the AI research ecosystem and increase the diversity of AI researchers.”

Task force leaders say that NAIRR administration and governance should follow a cooperative stewardship model “whereby a single federal agency serves as the administrative home for NAIRR operations and a steering committee comprising principals from federal agencies with equities in AI research drives the strategic direction of the NAIRR.”

The task force has suggested the framework provide a federated mix of computational and data resources, including testbeds, software and testing tools, along with a user support services portal.

The framework should cover the design and implementation of governance processes and safeguards similar to those of existing guidelines (such as the recent one from NIST).

The NAIRR implementation guidelines are expected to be rolled out in four phases: the program launch, an operating entity startup phase, an initial operational capability phase and ongoing operations. The task force estimates the budget for the project at $2.6 billion over six years.

The new guidelines come almost a year after the NAIRR task force published its first assessment of the AI landscape in the U.S.

Paravision shares new AI ethics efforts

Keeping with the framework theme, California-based trusted-AI software maker Paravision says it applies an ethical framework to the development and deployment of its software.

Writing in a blog post, the company says the guidelines established by Paravision outline the company’s goals to ensure that “AI is both ethically trained and conscientiously sold.”

The first part of these claims refers to three factors: ensuring sufficient and diverse data is used to create fair models, obtaining necessary data rights and investing in benchmarking.

As for the second part, Paravision says it sells only AI biometric models that meet its quality standards, and only to countries and entities that respect democratic principles.

Further, the company says it maintains a rigorous use case review process to identify business opportunities aligning with its own core values.

The blog post comes months after Paravision launched a new image search engine and improved biometric liveness detection. More recently, the company participated in the International Face Performance Conference (IFPC) in November.
