
Generative AI at The New School

Artificial intelligence (AI) refers to software or machines that exhibit abilities normally associated with human intelligence, such as understanding natural language, recognizing patterns, making decisions, and solving complex problems. Generative AI, or generative artificial intelligence, is a type of AI that can create new content such as text, images, audio, or video. It uses machine learning models to learn patterns from data and then predict what would come next in a pattern to create new content. Examples of generative AI tools include ChatGPT, MidJourney, Gemini, and Microsoft Copilot.

AI tools can help us do our jobs better and more efficiently. They can automate workflows, capture transcripts of conversations, generate instructions, enhance images and visual aids, and increase effectiveness and productivity. As we use these new tools, however, we must be aware of the inherent risks and understand our responsibilities. The guidance below covers identifying, selecting, and using AI tools responsibly at The New School and promotes adherence to ethical and contractual responsibilities.


Using AI Tools

When incorporating AI into your administrative work, use only approved AI tools and be especially mindful of the information you enter into them.

If you plan to upload TNS datasets to an AI tool, you must contact the Information Security and Privacy Office at ispo@newschool.edu before doing so. In general, do not use tools whose policies require you to relinquish ownership of your data or state that your data will be used to teach or train machine learning products.

Do not use AI products for final work unless they are certified as commercially safe (the AI tools must be trained on royalty-free data). All content—whether photo, video, graphic, or written—must comply with The New School's legal policies. Review the privacy, data usage, and data security policies of companies, tools, and products using AI before transmitting data to them.

Reminder: AI tools can serve as a great starting point for research and ideation, but their output should never be a final product. You are ultimately responsible for everything you create with AI; review and revise the output accordingly.

Identify Tools Using AI

Tools that use AI learn continually from the data they are given and the queries entered into them. Users must be aware of the sensitivity and confidentiality of the data provided to these tools and learn to recognize when a tool employs machine learning.

The following features signal the use of such tools:

  • Terms like automation, generative, generation, machine learning, intelligent solutions, powered by OpenAI/DALL-E, and personalized recommendations
  • Chat or “assistance” features that accept prompts with the promise to generate answers or data (e.g., FAQ chatbots and conversational tools like ChatGPT and Gemini)
  • Text entry prompts resulting in the generation or transformation of a visual or auditory product or of elements of an existing document (e.g., Add glasses to my face; Remove all trees in the background; Paint a picture of a beach in the style of Van Gogh; Produce a 140-character summary of this text; Change this voice to sound like Mariah Carey)
  • Automatically produced analysis or summaries of webpages, meetings, conversations, articles, general information datasets, or documents
  • Automatically produced bullet-point lists summarizing long-form text or describing a video
  • Predictive text or information based on data streams that have been provided or connected previously (e.g., This response was generated based on previous replies to this user or Description auto-completed based on previous entries)

Approved AI Tools

Coming soon.

Request Approval or Evaluation of AI Tools

Contact the designated offices below to request approval or evaluation of AI tools. The appropriate IT team will coordinate assessment of the platform, controls available to protect security and privacy, and features that may need to be removed, added, or disabled.

  • Submit a ticket to IT Central at ITCentral@newschool.edu for the following:
    • Requests for AI projects and/or services
    • Reports that AI features have been added to or turned on in a product already in use (for example, recording, note-taking, or transcription features)
  • Contact the Information Security & Privacy Office (ISPO) at ispo@newschool.edu with questions or to set up a meeting to discuss:
    • Acquiring/purchasing AI tools (systems that will use AI must undergo a review)
    • Sharing sensitive data with third parties, including entering it into AI tools

Report an Incident

You may find that you have inadvertently entered personal, sensitive, or confidential data into an AI tool. This constitutes an information security incident or data leak, and you must report it in the same way as any other incident by contacting IT Central at ITCentral@newschool.edu.

If you run a query and discover that New School institutional information is present in any AI system, report the discovery to IT Central at ITCentral@newschool.edu.


Best Practices for Responsible Use of AI

Do

  1. Follow existing university policies and standards, including the Acceptable Use Policy, Intellectual Property Rights, and the Protection Level Classification Guide, when processing any kind of Institutional Information or accessing IT Resources. AI tools should not be used in any way that would violate existing university standards or policies.
  2. Protect The New School’s data. Treat any information you feed into an AI tool, such as ChatGPT, as if it were being distributed to the public. Do not share any information that is confidential, business-strategic, or proprietary intellectual property. Anonymize personal data and personally identifiable information and, if possible, use settings that ensure inputs are not retained by the AI tool.
  3. Carefully review output from AI tools and be aware that responses sometimes contain subtle but meaningful hallucinations (false or misleading information presented as fact), uncited intellectual property, factual errors, and biased or inappropriate statements. Always use your judgment when analyzing AI responses.
  4. Be transparent about AI usage. Acknowledge when using AI to produce content and provide attribution when you do.
  5. Respect people’s privacy. If you use AI technology to summarize or transcribe meetings or forums attended by multiple individuals, obtain consent from all participants. Transcripts and summaries produced by AI should be reviewed for accuracy by the individual responsible for enabling the AI agent.
  6. Stay up-to-date with developments in AI. Notify ISPO if new AI features are incorporated into the products we use.

Don't

  1. Don't use your TNS password when signing up for an AI tool, and do not reuse the same login credentials or passwords across multiple tools.
  2. Do not enter data that falls into the moderate- or high-risk categories of the Protection Level Classification Guide into any AI tool or service, whether internally developed or provided by a third party.
  3. Do not enter sensitive data into any third party or non-vetted AI tool. Entering sensitive data can put the university at risk for violating regulatory, contractual, or legal obligations. Examples include: HR data that identifies an individual, FERPA-protected student information, and medical history.
  4. Do not enter proprietary information into an AI tool. Be aware that outputs may contain unattributed and unauthorized copyrighted information, which may violate intellectual property and confidentiality laws.

Additional Guidelines for Use of AI

AI Notetaking Applications

Note-taking and transcription services (including Fathom and Otter.ai, among others) can be helpful for documenting meetings, but these bots are not a substitute for your attendance and human interaction. Review the guidance on using AI notetaking applications to ensure you are in compliance with both privacy and information security guidelines.

See AI Transcription & Note-Taking Services for additional information.

Generative Artificial Intelligence in Teaching and Learning

The Generative Artificial Intelligence section of the Guide to Teaching and Learning provides a comprehensive overview of using ChatGPT and other generative AI tools for instruction and student learning, along with resources to assist faculty as they encounter and incorporate these tools throughout the academic year. Resources include sample syllabus statements, guidance on crafting effective prompts, and a guide to using AI tools in student assignments.

Review the comprehensive guidelines in the Guide to Teaching and Learning.

AI Usage in Marketing & Communications

The Marketing & Communication Department Guidelines on AI usage have been researched, designed, and written for use by those who do the work of our department. This includes department members as well as administrators, staff, educators, and vendors responsible for creating marketing and communication materials that align with the university's brand identity.

These guidelines on our Tools & Training website can serve as a resource for navigating the rapidly evolving landscape of AI technology as it applies to marketing and communication at The New School.
