Can Dental Teams Use ChatGPT With Patient Information? Please Don’t Paste Mrs. Jones’ Chart
May 04, 2026
This is Part 2 of Dental AI Essentials: A Practical Series for Dental Teams.
In Part 1, we looked at what AI can and cannot do in dentistry, and why it should be treated like a helpful but overconfident intern: useful, fast, and still in need of supervision.
Now we need to talk about one of the most important AI rules for dental teams:
Do not put patient information into an AI tool unless your practice has reviewed and approved that tool and that specific use.
Imagine a team member is trying to save time at the end of a busy day.
They have a chart note that needs to be turned into a clearer insurance narrative. The AI tool is open in another browser tab. The note is already written. Copy, paste, rewrite. Easy.
Except that note may include patient identifiers, treatment details, dates, provider information, or other information that could identify the patient.
That does not mean dental teams should be afraid of AI.
It means privacy needs to sit in the chair first.
Can Dental Teams Use ChatGPT With Patient Information?
Usually, no.
Dental teams should not enter patient information into ChatGPT or any other AI tool unless the dental practice has reviewed and approved the tool, the purpose, the privacy terms, the safeguards, and the workflow.
This includes chart notes, treatment details, appointment information, radiograph summaries, insurance details, patient messages, and any other information that could identify a patient.
The safer starting point is to use AI only with general, non-identifying prompts unless the practice has approved a specific tool and process for patient-related information.
Quick Answer
Dental teams should not use ChatGPT or other AI tools with patient information unless the practice has approved the tool, reviewed the privacy and security terms, confirmed the workflow, and trained the team on safe use.
That includes obvious details such as names, chart notes, radiograph reports, appointment details, and treatment plans. It may also include details that seem harmless on their own but could identify a patient when combined with other information.
In Canada, privacy obligations may come from federal private-sector privacy law, provincial private-sector privacy law, provincial health privacy law, dental regulatory expectations, or a combination of these. The exact obligations depend on the province, the practice, and the type of information involved.
The practical rule is simple:
If you would not casually email it to a random vendor, do not casually paste it into an AI prompt.
What Counts as Patient Information in a Dental AI Prompt?
Patient information is not limited to a name.
In a dental practice, patient information may include details such as:
- Chart notes
- Treatment plans
- Radiograph summaries
- Medical history
- Medication information
- Appointment dates
- Insurance details
- Payment or financial information
- Referral information
- Patient messages
- Age, location, or unusual clinical circumstances
Even when a name is removed, a patient may still be identifiable if several details are combined.
A safer AI prompt uses general, non-identifying information.
For example:
“Write a plain-language explanation of why a dental crown may be recommended after root canal treatment. Do not include patient-specific advice.”
This gives the team useful support without exposing patient information.
Why AI Privacy Matters in Dental Practices
Dental records are not ordinary business notes.
They can include health history, medications, treatment plans, radiographs, financial details, insurance information, contact information, and sensitive personal circumstances.
When a team member enters information into an AI tool, the practice needs to understand what happens next.
Important questions include:
- Is the information stored?
- Can the vendor access it?
- Is it used to train or improve the system?
- Where is the data processed or stored?
- Can it be deleted?
- Is there a suitable agreement in place?
- Are the safeguards appropriate for health information?
- Does the use match the purpose for which the patient information was collected?
The Office of the Privacy Commissioner of Canada explains that PIPEDA includes fair information principles such as accountability; consent; limiting collection; limiting use, disclosure, and retention; safeguards; and openness. Those principles are a useful lens when dental practices are reviewing AI use involving personal information.
Canadian privacy regulators have also issued guidance for generative AI that emphasizes legal authority and consent, openness, safeguards, and limiting the sharing of personal, sensitive, or confidential information.
If the practice cannot answer basic questions about where patient information goes, who can access it, and how it is protected, the practice is not ready to use that tool with identifiable patient information.
“But I Removed the Name” May Not Be Enough
Removing a patient’s name is a good start, but it does not always make the information safe to share.
A patient may still be identifiable through a combination of details, such as:
- Appointment date
- Unusual treatment history
- Location
- Age
- Medical history
- Insurance details
- Clinical circumstances
- Provider or referral details
This is especially important in smaller communities, specialty cases, or unusual clinical situations.
When in doubt, use general, non-identifying information instead.
For example, instead of pasting a patient’s chart note into an AI tool, a team member could ask:
“Write a plain-language explanation of why a dental crown may be recommended after a root canal. Do not include patient-specific advice.”
That kind of prompt can still be useful without exposing patient information.
What Canadian Dental Practices Need to Consider
Canada’s privacy landscape is not one-size-fits-all.
Depending on the province and the practice, obligations may come from federal private-sector privacy law, provincial private-sector privacy law, provincial health privacy law, or dental regulatory guidance.
For example, PIPEDA sets out fair information principles for many private-sector organizations, including accountability, consent, safeguards, and limiting collection, use, disclosure, and retention.
In Ontario, PHIPA sets rules for the collection, use, and disclosure of personal health information by health information custodians and others who receive personal health information from them.
In Alberta, the Health Information Act includes breach notification requirements for custodians where there is a risk of harm from the loss, unauthorized access, or unauthorized disclosure of individually identifying health information.
The details vary by province. The principle does not:
Dental practices need to know how patient information is collected, used, disclosed, stored, protected, and transferred.
AI does not remove that responsibility. It makes the questions more immediate.
The Tool You Choose Matters
Not all AI tools are built for the same purpose.
A free public AI tool, a consumer chatbot, an enterprise AI platform, and a dental-specific software product may have very different terms, safeguards, access controls, retention practices, and data-use policies.
Before approving an AI tool, a dental practice should review:
- Purpose: What will the tool be used for?
- Data: What information will staff enter?
- Privacy terms: Is input stored, reviewed, or used for training?
- Location: Where is data processed or stored?
- Access: Who can access the information?
- Security: What safeguards are in place?
- Contract: Is there an agreement that fits the practice’s obligations?
- Accountability: Who in the practice is responsible for approval and oversight?
This does not mean every practice needs a fifty-page AI policy before anyone drafts a lunch-and-learn email.
It does mean patient information deserves a higher bar than convenience.
A Practical Dental Scenario
A practice manager wants help rewriting a chart note into a cleaner insurance narrative.
The quick version is tempting:
Copy. Paste. Rewrite. Done.
But if the note includes patient identifiers, treatment details, dates, provider information, or sensitive details, that simple action may create privacy risk.
A safer process would be:
- Use an approved tool only.
- Remove identifying information wherever possible.
- Use general facts instead of full chart notes.
- Avoid entering sensitive details unless the tool has been reviewed and approved for that purpose.
- Have the final narrative reviewed by the appropriate team member.
- Keep documentation consistent with practice policy.
AI can help with wording.
It should not become an invisible shortcut around privacy review.
What Dental Practices Should Do Next
Start with a simple AI privacy rule:
No patient information goes into AI tools unless the tool and its use have been approved by the practice.
Then build from there.
A practical next step is to create a short AI-use checklist for your team:
- Is this tool approved?
- Am I entering patient information?
- Could the patient be identified from the details?
- Do I know where the information goes?
- Does this use match our privacy obligations and practice policy?
- Does a trained person need to review the output?
- Do we need to document this use?
The checklist does not need to be complicated.
It needs to be clear enough that a busy team member can use it before clicking submit.
How Myla Helps
For dental teams that want clear, practical guidance, Myla’s dental AI training helps practices understand AI risks, privacy considerations, safer workflows, and team responsibilities.
This is where training makes the difference.
A written rule helps. A trained team is more likely to use the rule when the day gets busy, the message feels urgent, and the AI tool is sitting one browser tab away.
Frequently Asked Questions
Can dental teams use ChatGPT or other AI tools with patient information?
Only if the practice has reviewed and approved the tool and the specific use. The practice should understand privacy, consent, storage, access, retention, data-use terms, and applicable provincial or federal privacy obligations before patient information is entered.
Is it safe if we remove the patient’s name?
Not always. A patient may still be identifiable through a combination of details such as dates, treatment history, location, age, medical details, referral information, or unusual circumstances.
Do Canadian privacy laws apply to AI use in dental practices?
Yes. Depending on the province, the type of practice, and the type of information, federal and provincial privacy laws may apply. Canadian privacy regulators have also published generative AI guidance that connects AI use with privacy principles such as legal authority, consent, openness, safeguards, and limiting the sharing of sensitive information.
Can AI help with patient education without privacy risk?
Yes, when it is used with general, non-identifying prompts and the output is reviewed by the dental team before use. A safer approach is to ask for general explanations, not patient-specific advice based on identifiable patient information.
What is the safest way for dental teams to use AI?
The safest starting point is to use AI for general, non-identifying support. For example, dental teams can ask AI to draft general patient education wording, checklist ideas, meeting agendas, or plain-language explanations. They should avoid entering chart notes, patient messages, treatment details, or other identifiable information unless the tool and use have been approved by the practice.
What should a dental practice do first?
Create a simple rule: no patient information goes into AI tools unless the tool and use have been approved. Then train the team on what counts as patient information and how to use AI safely.
Final Thoughts
AI can help dental teams communicate more clearly, draft faster, and simplify routine work.
But patient information still needs careful handling.
Before anyone pastes a chart note, treatment detail, radiograph summary, or patient message into an AI tool, the practice needs to understand the privacy implications. That includes consent, safeguards, tool terms, data storage, access, and whether the use is appropriate under applicable Canadian privacy obligations.
The safest starting point is simple:
Use AI for general support. Do not use identifiable patient information unless the tool and process have been properly reviewed and approved.
In Part 3 of Dental AI Essentials: A Practical Series for Dental Teams, we will look at another major AI risk:
Dental AI Essentials: The AI Said What? Hallucinations in Dental Practice
We will cover why AI can sound confident even when it is wrong, and how dental teams can review AI-generated work before it creates confusion, documentation problems, or patient communication issues.
About Anne Genge
Anne Genge is the founder of Myla Training Corp, a Canadian dental AI, privacy, and cybersecurity training company. She helps dental practices understand technology risk in plain language and train their teams to recognize everyday privacy, cybersecurity, and AI-related risks.
Anne is a national speaker and educator on dental cybersecurity, privacy, and AI risk. Through Myla, she creates practical training for dental teams that supports safer workflows, better documentation, and more confident decision-making.
Learn More. Worry Less. Stay Safe.™
Train Your Team to Spot AI Risks Today