Mediaforta AI policy
Table of contents
- Introduction
- Our vision
- What do we mean by AI?
- Objectives of this policy
- How do we use AI at Mediaforta?
- Where do we not use AI?
- AI and ethics
- Permitted tools and governance
- Roles and responsibilities
- Training
- Monitoring and adjustment
- Final statement
Introduction
AI is conquering the world at an unprecedented speed. It’s a revolution that offers both companies and consumers a wealth of new opportunities.
Mediaforta fully embraces AI – not blindly, but with a clear vision and sense of responsibility. Because while AI brings many benefits, it also presents significant challenges. Think of issues like language models copying content without the explicit consent of content creators, water and energy consumption many times higher than that of a simple search engine query, or the risk of incorrect and fake information.
That’s why Mediaforta finds it important to implement a clear vision and policy regarding AI.
Our vision
“We like computers, but we love people.”
— Tom Peeters, CEO Mediaforta
This statement captures our entire AI vision.
Mediaforta is a human and AI-driven content marketing agency
We love AI because it’s a revolution that strengthens our way of working: a tool that speeds up processes, provides insights, and supports our creativity. We believe this allows us to evolve faster as a company and bring more value to our clients. AI is therefore part of our workflows.
Human input makes the ultimate difference
Regardless of what AI generates, human input will always add extra value. For example, we believe a newspaper written by people (whether or not supported by AI) will always be more appreciated than one written entirely by AI without any human involvement. Do we prefer conversations with an AI bot, or would we rather talk to a real person? Do we value a drawing made by a person more than one made by an AI? We believe in a good balance between human and machine.
A critical perspective
Although Mediaforta actively uses AI in its daily operations, we also take a critical stance towards it. What about language models trained on content without the creators’ consent? We simply call this theft. While major AI providers often emphasize ethics, they do not answer our question about why they don’t compensate content creators for this use. (We’ve asked Microsoft, Google, and others.) Mediaforta will always advocate for fair compensation for these content creators. After all, these providers pay for water, electricity, and heating – so why not for the content used to train their models?
Speaking of water and energy: AI models consume massive amounts of both. Mediaforta values sustainability, making this an important focus within our AI policy.
AI models can generate huge amounts of low-value, misleading content, and people may be unintentionally misled by the errors and untruths it contains. At Mediaforta, we find it essential to indicate how content was created, and to what extent AI was involved. In short: we advocate for transparent AI use – both in content creation and in the other services we provide to clients.
Our policy
1. What do we mean by AI?
In this document, AI tools or technologies refer to any digital system that automatically learns from data and subsequently generates output or performs tasks.
We distinguish between public tools (freely accessible online, open-source) and closed tools.
2. Objectives of this policy
With this policy, we aim to:
- Structure and streamline AI use
- Clarify what is and isn’t allowed
- Guarantee transparency and ethics towards employees, clients, and partners
- Contribute to sustainable business operations and corporate social responsibility
3. How do we use AI at Mediaforta?
AI plays a supporting role across various domains within our agency.
We’re open about our use of AI, both internally and towards clients. If AI plays a role in creating content, we’ll mention this when appropriate. We remain responsible for the content we deliver.
Applications include:
- Strategy: AI helps analyze data, identify patterns, and spot new opportunities.
- Optimizations: We improve content based on performance, SEO criteria, and user behavior.
- Brainstorming: AI inspires us during creative processes like trend analysis and idea generation.
- Adaptations: Content is personalized, translated, and adapted faster for various target groups and channels.
- Selection Processes: From transcribing job interviews to automating sorting and selection based on candidate information.
- Client Follow-up: Status meetings are automatically transcribed, summarized, and documented, improving speed and proactivity.
- Research & Analysis: Faster identification of relevant sources, trends, keyword analysis, and performance interpretation.
- Automation: Repetitive tasks like A/B testing or content variations can be executed through AI routines.
- Problem-solving: AI assists with technical challenges, scripts, and optimizations needed to enhance our tools.
Conditions:
- AI is only used as a supporting tool.
- All output is reviewed and approved by experts.
- AI must never undermine the quality or authenticity of our work.
4. Where do we not use AI?
We highly value our own authenticity and the power of human thinking, with deep respect for ethics and transparency. Therefore, we do not use AI to:
- Fully generate content that is shared without human review.
- Process confidential information or personal data in public AI tools.
- Make decisions regarding personnel, finances, or legal matters.
- Use tools that do not meet basic requirements for privacy, copyright, and security.
5. AI and ethics
Although AI offers many benefits, we choose a critical, ethical, and sustainable approach:
- We apply a strict Content Risk Management process, where every AI use is reviewed and validated by a human expert within our organization.
- We prefer closed AI systems and European tools, in line with the European AI Act.
- We avoid tools that do not meet our security and privacy standards.
- We strive for transparency: internally and externally we communicate where and how AI is used. To provide this clarity about the role of AI in our content creation process, we developed an AI Influence Scale.
The image below illustrates the balance between human and AI input.

Here’s a breakdown of the scale:
- 100% Human: This content is entirely human-made, with little to no involvement of AI.
- 75% Human & 25% AI: This content is created by humans with support from AI—for example, AI was used to spark ideas or draft an outline.
- 50% Human & 50% AI: This content represents an equal blend of AI output and human editing. AI may have generated the base structure or large text sections, but human input was crucial in shaping the final version.
- 25% Human & 75% AI: This content is largely AI-generated but has been reviewed and modified by humans where needed.
- 100% AI: This content is fully AI-generated, with minimal to no human involvement beyond prompting. The AI’s output is published with little or no further human verification.
When content is translated by AI, we shift the rating 10 percentage points towards AI. For example, if an article was originally 0% AI in its original language but was then translated using AI, the translated version would be marked as 10% AI. If the original was 25% AI, the translated version would be 35%, and so on.
- We monitor the environmental impact of AI (energy and water consumption) and choose efficient, conscious technology wherever possible.
- We remain vigilant about copyright ethics, crediting correct sources when AI assists us.
- We actively safeguard our own learning and thinking abilities, ensuring AI keeps us sharp, not dependent.
- Inclusivity: we avoid bias, stereotyping, or false assumptions that AI can inherit from its training data.
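To make the AI Influence Scale rules above unambiguous for anyone automating content labels, the translation shift can be sketched as a small helper. This is an illustrative sketch only, not an official Mediaforta tool; the function name and the cap at 100% are our own assumptions.

```python
def ai_share_after_translation(original_ai_share: int) -> int:
    """Shift an AI Influence Scale rating 10 percentage points towards AI
    when the content is translated by AI, as the policy describes.

    The cap at 100 is an assumption: a rating cannot exceed fully AI-made.
    """
    if not 0 <= original_ai_share <= 100:
        raise ValueError("AI share must be between 0 and 100")
    return min(original_ai_share + 10, 100)


# Examples from the policy: a 0% AI article becomes 10% AI once
# AI-translated, and a 25% AI article becomes 35% AI.
print(ai_share_after_translation(0))   # 10
print(ai_share_after_translation(25))  # 35
```

The same rule applies at every point on the scale, which keeps labelling consistent across languages.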
6. Permitted tools and governance
- We only use AI tools that meet basic conditions around data storage, privacy, transparency, and reliability.
- We maintain an internal list of approved AI tools (reviewed annually).
- New tools are discussed within the team before adoption.
7. Roles and responsibilities
- Every employee is responsible for the correct and conscious use of AI.
- Team leads or project managers check that AI-assisted content meets quality standards.
- The CEO supervises the enforcement of this AI policy and acts as a point of contact for questions or incidents.
- We collaborate with a network of freelance content specialists and other freelancers, each using their own tools and systems. Since we have no direct control here, we expect them to comply with our AI policy as outlined in this document. This means:
- Being transparent about AI use when delivering content
- Not using AI for final products without human intervention
- Using only validated AI tools
- Never entering confidential client data into AI tools
The final responsibility for quality control lies with our internal content managers.
8. Training
AI evolves rapidly. As an agency, we find it important to keep our employees, freelancers, and clients informed and trained on how to use AI for better results.
9. Monitoring and adjustment
This policy is evaluated and updated semi-annually, based on:
- New technological developments
- Feedback from the team or clients
- Changes in regulations and sustainability requirements
It’s a living document: input is welcome from everyone within the organization.
10. Final statement
Our mission is to help brands grow through highly specialized content creation, delivered by the world’s most expert content creators and marketers. AI helps us empower this mission – not by taking over our work, but by making our work smarter and stronger. With the right balance between human and machine, we continue to lead in a rapidly evolving industry.
AI offers opportunities – but also responsibilities. This document helps us approach AI professionally, consciously, and ethically – both internally and externally.
In doing so, we aim not only to strengthen ourselves but also to contribute to a sustainable and trustworthy sector.
For questions, doubts, or suggestions: contact a responsible person within Mediaforta.