AI Governance

Laying the Foundation for AI

To implement responsible AI in higher education, New York Tech has developed core principles to guide our decisions, policies, and deployment. These principles form the cornerstone of all AI activities at New York Tech, ensuring that AI integration prioritizes ethical considerations and upholds institutional integrity.

Our Core AI Principles

Fairness and Reliability: AI systems should be developed to minimize bias and to produce outcomes that are consistent, valid, and equitable.

Human Oversight and Value Alignment: AI should support—not replace—human judgment, especially in legal or ethical contexts, and must reflect New York Tech’s values.

Transparency and Explainability: Users must be informed when AI is used, understand how it works, and be able to interpret its results clearly.

Privacy, Security, and Safety: AI systems must protect user data, ensure security, and mitigate risks that could compromise institutional or personal safety.

Accountability: Institutions and AI providers must set clear frameworks to ensure ethical oversight and responsible use.

AI Policies and Guidelines

Employee Guidelines for Use of GenAI

New York Tech has established the following guidelines to help employees use Generative AI (GenAI) tools responsibly and ethically while protecting our intellectual property and institutional data.

Academic Integrity Policy: U.S. Campuses

New York Tech’s Academic Integrity Policy includes a section on unauthorized uses of Generative AI tools such as ChatGPT, Sudowrite, and Gemini.

AI Leadership at New York Tech

Our AI Executive Steering Committee, composed of experts from across the university, is dedicated to creating an environment that supports cutting-edge research and promotes interdisciplinary collaboration. We look forward to the advancements that AI will bring to our institution and beyond.

Executive Steering Committee