Lama Nachman is an Intel Fellow and director of the Intelligent Systems Research Lab at Intel Labs. (Credit: Intel Corporation)
SANTA CLARA, Calif.--(BUSINESS WIRE)--The following is an opinion editorial by Lama Nachman, Intel Fellow and director of the Intelligent Systems Research Lab at Intel Labs.
Artificial intelligence (AI) has become a key part of everyday life, transforming how we live, work, and solve new and complex challenges. From making voice banking possible for people with neurological conditions to helping autonomous vehicles make roads safer and helping researchers better understand rainfall patterns and human population trends, AI has allowed us to overcome barriers, make societies safer and develop solutions to build a better future.
Despite AI's many real-life benefits, Hollywood loves to tell alarming stories of AI taking on a mind of its own and menacing people. These science fiction scenarios can distract us from the very real but more banal ways in which poorly designed AI systems can harm people. It is critical that we continuously strive to responsibly develop AI technologies so that our efforts do not marginalize people, use data in unethical ways or discriminate against different populations — especially individuals in traditionally underrepresented groups. These are problems that we as developers of AI systems are aware of and are working to prevent.
At Intel, we believe in the potential of AI technology to create positive global change, empower people with the right tools and improve the life of every person on the planet. We've long been recognized as one of the most ethical companies in the world, and we take that responsibility seriously. We've had Global Human Rights Principles in place since 2009 and are committed to high standards in product responsibility, including AI. We recognize the ethical risks associated with the development of AI technology and aspire to be a role model, especially as thousands of companies across all industries are making AI breakthroughs using systems enhanced with Intel® AI technology.
We are committed to responsibly advancing AI technology throughout the product lifecycle. I am excited to share our updated Responsible AI web page, featuring the work we do in this space and highlighting the actions we are taking to operate responsibly, guard against the misuse of AI and hold ourselves accountable through internal oversight and governance processes.
Review Process
Our multidisciplinary Responsible AI Advisory Council conducts a rigorous review process throughout the lifecycle of an AI project. The council reviews product and project development with our ethical impact assessment through the lens of six key areas: human rights; human oversight; explainable use of AI; security, safety and reliability; personal privacy; and equity and inclusion. The goal is to assess potential ethical risks within AI projects and mitigate those risks as early as possible. Council members also provide training, feedback and support to the development teams to ensure consistency and compliance across Intel.
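For illustration only, here is a minimal sketch of how a project team might track such an assessment in code. The six review areas are taken from the paragraph above; the class, field names and example findings are hypothetical and not part of Intel's actual tooling.

```python
from dataclasses import dataclass, field

# The six review areas named above; the structure around them is illustrative only.
REVIEW_AREAS = [
    "human rights",
    "human oversight",
    "explainable use of AI",
    "security, safety and reliability",
    "personal privacy",
    "equity and inclusion",
]

@dataclass
class EthicalImpactAssessment:
    project: str
    # Map each review area to a free-text risk note; an empty note means "not yet assessed".
    findings: dict = field(default_factory=lambda: {area: "" for area in REVIEW_AREAS})

    def open_items(self):
        """Return the areas that still lack an assessment note."""
        return [area for area, note in self.findings.items() if not note]

assessment = EthicalImpactAssessment(project="example-ai-project")
assessment.findings["personal privacy"] = "Training data is de-identified before use."
print(assessment.open_items())  # the five areas still awaiting review
```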
Diversity and Inclusion
Bias in AI algorithms can't be addressed with tools and processes alone. AI technology must be shaped by people with diverse backgrounds, voices and experiences. Intel strives to ensure that AI practitioners, and the technologies they build, are equitable and inclusive. Diverse teams can offer perspectives and raise concerns that may be missed by more homogeneous teams. We rely extensively on social science research to understand situations in which bias is likely to occur in datasets, problem formulation or modeling, as well as situations where unanticipated risks may lead to harm in real-world integration.
We also recognize the need to include ethics as a core part of any AI education program. Our digital readiness programs, such as our AI for Future Workforce Program, engage students in the principles of AI ethics and help them understand how to create responsible AI solutions. We also actively work with community colleges, which attract students with a rich variety of backgrounds and expertise. And we continually seek new ways to engage with people from all walks of life who are affected by new technologies such as AI.
Privacy and Security
Securing AI and maintaining data integrity, privacy and accuracy are at the heart of Intel's security research and development efforts. At Intel, we approach these issues holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. For example, our research at the Private AI institute (https://private-ai.org/) aims to inform people about ways they can better safeguard their private information. Our federated learning and openFL efforts (https://github.com/intel/openfl) focus on how to make useful AI from sensitive data. Our DARPA GARD project (https://www.intel.com/content/www/us/en/newsroom/news/intel-joins-georgia-tech-darpa-program-mitigate-machine-learning-deception-attacks.html) examines ways to verify that AI won't be tampered with. Finally, Project Amber (https://www.intel.com/content/www/us/en/newsroom/news/vision-2022-project-amber-security.html) provides customers and partners a way to better trust infrastructure from edge to cloud.
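To make the federated learning idea concrete, here is a minimal federated-averaging sketch. It is a generic illustration of training on sensitive data without pooling it, assuming a toy linear model and synthetic data; it does not use or depict OpenFL's actual API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a site's local linear model (illustrative)."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(local_weights, sizes):
    """Aggregate local models, weighting each site by its dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

rng = np.random.default_rng(0)
global_weights = np.zeros(3)
# Three sites, each holding data that never leaves its premises.
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

for _ in range(20):  # federated rounds: only model weights are exchanged
    local_models = [local_update(global_weights.copy(), X, y) for X, y in sites]
    global_weights = federated_average(local_models, [len(y) for _, y in sites])

print(global_weights)  # a shared model trained without centralizing raw data
```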
Recently, we have also seen many ethical concerns raised around deepfakes. We have been exploring how to incorporate deepfake detection technology and tools for determining the original sources of information into Intel products, and how customers can integrate these technologies into their platforms. Two initial research areas are deepfake detection (https://www.intel.com/content/www/us/en/research/blogs/trusted-media.html), which identifies fake media produced or modified using machine learning and AI, and media authentication technology to confirm the validity of content.
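As a concrete illustration of the media authentication side, the sketch below shows the general idea of publishing a keyed digest alongside a piece of media so consumers can check that it has not been altered. The key handling and function names are hypothetical; this is not a description of Intel's media authentication technology.

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical; real systems use managed keys

def sign_media(content: bytes) -> str:
    """Return a hex digest binding the content to the publisher's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, digest: str) -> bool:
    """Check that the content still matches the digest published with it."""
    return hmac.compare_digest(sign_media(content), digest)

original = b"frame bytes of the original video segment"
tag = sign_media(original)

print(verify_media(original, tag))                # True: content is intact
print(verify_media(original + b"tampered", tag))  # False: content was modified
```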
Collaboration
Developments in this rapidly evolving field affect our ecosystem of partners, the industry as a whole and the world. That is why we continue to invest in critical research and work with academic partners in areas such as privacy, security, human/AI collaboration and sustainability. We believe that unlocking the potential in human-AI collaboration can lead to an exciting future for AI. We are developing the capabilities needed to make this happen and applying them internally in our design and manufacturing processes. We strive to be transparent about our position and practices so we can address shared challenges and improve our products and the overall industry. We actively engage in forums like the Roundtable on Human Rights & AI (https://www.articleoneadvisors.com/business-roundtable-on-human-rights-ai), Global Business Initiative on Human Rights (https://gbihr.org/), Partnership on AI (https://partnershiponai.org/) and the Pledge to Build Gender-Fair AI (https://www.interelles.com/wp-content/uploads/2021/06/Pacte-Femmes-IA-EN-2021-def35348.pdf) to learn from our peers and establish ethical, moral and privacy parameters so we can build a thriving AI business.
Looking Ahead
AI has come a long way, but there is still so much more to be discovered. We are continually finding ways to use this technology to drive positive change and better mitigate risks. At Intel, we are committed to deepening our knowledge using a multidisciplinary approach and focusing on amplifying human potential with AI through human-AI collaboration.
I look forward to seeing how we as a company and the industry at large can continue to work together to unleash the positive power of AI.
About Intel
Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com (https://newsroom.intel.com/) and intel.com (http://intel.com/).
© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Contacts
Orly Shapiro
1-949-231-0897
orly.shapiro@intel.com
Release Summary
In an editorial, Lama Nachman writes that Intel is responsibly leveraging artificial intelligence to mitigate risks and drive positive change.
Social Media Profiles
- @IntelNews (https://twitter.com/intelnews)