AI Applications We Will Not Pursue
Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
Technologies that gather or use information for surveillance violating internationally accepted norms.
Technologies whose purpose contravenes widely accepted principles of international law and human rights.
Conclusion: We believe these principles are the right foundation for our company and our future development of AI. We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.