
Explanatory Note on Chatbots (ChatGPT Example) under Turkey’s KVKK

  • Writer: A. F. Hanyaloglu
  • Oct 6
  • 4 min read

Executive Summary


The Personal Data Protection Authority has published an explanatory note addressing chatbot applications—particularly AI-powered systems such as ChatGPT—that simulate human conversation and process large volumes of personal data. The note clarifies how such technologies operate, the categories of personal data they process, and the key data protection risks involved. It highlights the importance of transparency, lawful processing, and user awareness when deploying or interacting with chatbots. Developers and organisations are advised to apply privacy-by-design principles, conduct risk assessments, and ensure compliance with the Personal Data Protection Law No. 6698 to minimise data security and privacy risks, particularly for children and vulnerable users.


At a Glance


  • Chatbots are software tools designed to simulate human interaction using natural language processing (NLP).

  • AI chatbots continuously learn from users’ inputs, enhancing their performance but also expanding privacy risks.

  • They process extensive personal data, including account details, messages, IP addresses, cookies, and uploaded files.

  • Transparency and user awareness are crucial for ensuring lawful data processing.

  • Developers must implement data protection measures such as privacy-by-design, lawful processing grounds, and age verification.

  • The Authority emphasises proactive protection of children and adoption of international privacy and security standards.


Context & Background


In June 2025, the Personal Data Protection Authority published an explanatory note titled “Chatbots (ChatGPT Example)” to raise public and corporate awareness about the data protection implications of conversational artificial intelligence systems.


These systems—exemplified by ChatGPT, Siri, Alexa, and Gemini—have become part of daily life for both individuals and organisations. They assist with tasks ranging from customer support and information retrieval to coding and translation. However, their growing use raises questions about transparency, lawful data processing, and the security of personal data shared during interactions.


The Authority aims to provide a balanced view: recognising the operational benefits of chatbot technologies while drawing attention to the obligations of developers, service providers, and users under the Personal Data Protection Law No. 6698. The note identifies potential privacy risks, especially those arising from excessive data sharing, insufficient user awareness, and the lack of safeguards for minors.


Key Points Explained


1. What Are Chatbots and AI Chatbots?


Chatbots are software programs that simulate human conversation through written or spoken language. Using natural language processing (NLP), they interpret user input, determine intent, and respond accordingly.


AI-powered chatbots, such as ChatGPT, go a step further by employing machine learning to understand context, tone, and emotion. They learn continuously from each interaction, allowing them to deliver more personalised and relevant responses—but also leading to the accumulation of personal data over time.


2. Typical Functions and Use Cases


Within NLP, AI chatbots combine natural language understanding (NLU), which interprets user intent, with natural language generation (NLG), which produces coherent responses. Their applications include:


  • Customer support and help desks

  • Answering questions and assisting research

  • Programming and content creation

  • Translation and summarisation

  • Sentiment and emotion analysis


By automating these tasks, AI chatbots reduce the need for human intervention, offering time and cost efficiencies. However, their broad functionality often requires access to diverse data sources, increasing the risk of overcollection or misuse of personal data.
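
To make the NLU-to-NLG division concrete, here is a deliberately minimal sketch in Python. Everything in it is hypothetical and rule-based: systems such as ChatGPT use large machine-learned models for both steps, but the split between interpreting intent and generating a response is the same.

```python
# A toy chatbot loop: "understanding" is reduced to keyword-based intent
# matching and "generation" to canned templates. Real AI chatbots replace
# both steps with machine-learned models; this only illustrates the
# division of labour between NLU and NLG.

INTENT_KEYWORDS = {
    "support": ["help", "problem", "issue", "error"],
    "translation": ["translate", "translation"],
    "summary": ["summarise", "summarize", "summary"],
}

RESPONSES = {
    "support": "I can help with that. Could you describe the issue?",
    "translation": "Please paste the text you would like translated.",
    "summary": "Please paste the text you would like summarised.",
    "unknown": "Sorry, I did not catch that. Could you rephrase?",
}

def detect_intent(message: str) -> str:
    """Toy NLU step: map free text to a named intent."""
    lowered = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return intent
    return "unknown"

def generate_reply(intent: str) -> str:
    """Toy NLG step: render a response for the detected intent."""
    return RESPONSES[intent]

for message in ["I have a problem with my order", "can you translate this?"]:
    print("user:", message)
    print("bot: ", generate_reply(detect_intent(message)))
```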


3. Types of Personal Data Processed


AI chatbots rely on vast and varied datasets to function effectively. Depending on their purpose and configuration, they may process:


  • Account information: names, contact details, credentials, and payment data.

  • Input content: text or files uploaded during interactions.

  • Communication data: message content and related metadata.

  • Social media data: information shared via connected platforms.

  • Technical data: IP addresses, browser details, device type, access times.

  • Cookies and behavioural data: tracking usage for analytics or improvement.

  • Voice and speech data: where voice input or transcription features are used.


Such processing is often necessary to deliver functionality, but it must always align with the Law’s principles of purpose limitation, proportionality, and data minimisation.
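
Data minimisation is one principle that developers can partly enforce in code, for example by redacting obvious identifiers from messages before they are logged or reused. The sketch below is a crude illustration: pattern-based matching catches only the most blatant identifiers and is no substitute for dedicated PII-detection tooling.

```python
import re

# Illustrative data-minimisation step: redact obvious identifiers from a
# chat message before it is stored or reused. Regex matching is a rough
# approximation; production systems use dedicated PII-detection tools.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(message: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} REDACTED]", message)
    return message

print(redact("Contact me at ayse@example.com or +90 555 123 4567"))
# -> Contact me at [EMAIL REDACTED] or [PHONE REDACTED]
```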


4. Data Protection and Security Considerations


The Authority underscores the need for transparency and user awareness. Chatbot providers must clearly inform users about what personal data is collected, for what purpose, how long it is stored, with whom it is shared, and what rights data subjects have.
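
These disclosure items lend themselves to a structured record that can back both the user-facing notice and internal documentation. The sketch below is a hypothetical shape for such a record; the field names and sample values are illustrative only, not a legal template.

```python
from dataclasses import dataclass, field

# Hypothetical structure for the disclosures a chatbot provider must make:
# what is collected, why, for how long, with whom it is shared, and which
# rights users hold. All names and values here are illustrative.

@dataclass
class PrivacyNotice:
    data_collected: list[str]
    purposes: list[str]
    retention: str
    shared_with: list[str] = field(default_factory=list)
    data_subject_rights: list[str] = field(default_factory=list)

notice = PrivacyNotice(
    data_collected=["account details", "message content", "IP address"],
    purposes=["providing responses", "service improvement"],
    retention="deleted or anonymised after 24 months (example value)",
    shared_with=["cloud hosting provider"],
    data_subject_rights=["access", "rectification", "erasure", "objection"],
)
print(notice)
```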

Common risks include:


  • Users oversharing sensitive or private information.

  • Exploitation of technical vulnerabilities leading to data breaches.

  • Insufficient safeguards for minors and failure to verify users’ ages.


To mitigate these risks, organisations must ensure robust technical and organisational measures—encryption, secure transmission channels, access control, and continuous monitoring—are in place.
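
As a sketch of one such measure, encrypting stored transcripts at rest, the example below uses the widely available third-party cryptography package. Key handling is deliberately oversimplified: in practice the key would be held in a key-management service, never alongside the data it protects.

```python
# Sketch of one technical measure: encrypting chat transcripts at rest.
# Requires the third-party `cryptography` package (pip install cryptography).

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: fetched from a key store
cipher = Fernet(key)

transcript = "user: my order number is 12345".encode("utf-8")
stored = cipher.encrypt(transcript)   # ciphertext safe to persist
restored = cipher.decrypt(stored)     # decrypt only under access control

assert restored == transcript
```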


5. Key Compliance Recommendations for Developers


When designing or operating chatbot applications, developers and data controllers should:


  • Conduct risk assessments before any processing begins.

  • Comply with the accountability principle and document processing activities.

  • Ensure lawful processing in accordance with Articles 5 and 6 of the Law.

  • Fulfil the information obligation under Article 10 during data collection.

  • Adopt privacy-by-design and by-default approaches at every development stage.

  • Apply recognised international standards and seek certification where applicable.

  • Implement secure data transmission for text, voice, or image inputs.

  • Observe the Authority’s recommendations for AI developers and service providers.

  • Ensure accurate and reliable age verification mechanisms for child users (see the sketch at the end of this section).

  • Take proactive steps to prevent children from encountering harmful or inappropriate experiences.


These measures aim to strike a balance between technological innovation and personal data protection, ensuring chatbot technologies evolve responsibly within the Turkish regulatory framework.
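
As a final illustration of the age-verification point above, a minimal age gate might look as follows. It is a hypothetical sketch: a self-declared birth date is the weakest verification signal and would normally be combined with stronger checks, and the 13-year threshold is an assumption chosen for illustration, not a figure from the note.

```python
from datetime import date

# Hypothetical age gate. The note calls for "accurate and reliable" age
# verification; self-declared birth dates, as used here, are the weakest
# form and serve only to illustrate the control flow.

MINIMUM_AGE = 13  # assumption for illustration; the note sets no threshold

def age_on(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_use_chatbot(birth_date: date, parental_consent: bool) -> bool:
    """Allow adults; allow minors above the floor only with consent."""
    age = age_on(birth_date, date.today())
    return age >= 18 or (age >= MINIMUM_AGE and parental_consent)

print(may_use_chatbot(date(2014, 5, 1), parental_consent=False))  # False
```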


Why It Matters for Businesses


As chatbot adoption accelerates across industries—from retail to banking and customer service—organisations using or developing these tools must recognise that they are not exempt from data protection obligations.

The explanatory note makes it clear that processing personal data through AI chatbots triggers all responsibilities under the Law, including lawful basis, transparency, and security requirements. Businesses integrating such tools must therefore review their privacy policies, vendor agreements, and data flows to ensure full compliance.

For developers and AI solution providers, the Authority’s guidance signals an expectation of privacy engineering discipline: embedding data protection considerations from the earliest design stage. For users and corporate clients, it reinforces the importance of informed and cautious interaction with AI-driven systems.

Ultimately, the Authority’s message is one of balance—embrace innovation, but do so with respect for individual rights, accountability, and transparency.


Source: Based on the explanatory note “Sohbet Robotları (ChatGPT Örneği) Hakkında Bilgi Notu” published by the Personal Data Protection Authority (June 2025).
