Category: News

  • New findings from Finland

    Educators are highly active in digital environments — most use digital tools several times a day in workshops, mentoring, group sessions and learning programs. Awareness of online risks is strong, especially around privacy, misinformation, cyberbullying, phishing, and AI-related threats like deepfakes. However, when it comes to AI, experience varies: 👉 Some already use AI for admin…

  • Takeaways from the Third Meeting

    β€”

    by

    in

    We successfully held our third partner meeting, where we reviewed progress, strengthened our methodology, and finalized key next steps for the AIwareness project. Partners evaluated the toolkit methodology, agreed on testing workshops in each country, and aligned on a stronger dissemination strategy. We also confirmed the upcoming international training course (13–17 April) and planning for…

  • What’s missing in current Erasmus+ & EU online safety materials?

    Our analysis reveals several important gaps that still need attention:

    🔹 Hands-on AI recognition for youth: Very few resources help young people practically understand how generative AI works or how to spot deepfakes in interactive, age-appropriate workshops.

    🔹 Phishing simulations for youth work: While phishing simulators exist for schools and workplaces, the youth sector lacks ready-made, low-cost…

  • What did we find about online safety resources in Europe?

    Our review highlights three key insights:

    Strong foundations exist – Several well-tested Erasmus+ toolkits already support online safety, phishing prevention, and digital security through interactive and peer-based approaches (e.g. SALTO and KA2 projects).

    EU-level platforms matter – Hubs like Better Internet for Kids, Safer Internet Centres, and the European Youth Portal provide reliable, up-to-date materials…

  • Ethical use of AI

    β€”

    by

    in

    What are we teaching young people about AI? Findings from the AIwareness project show a clear imbalance in how AI-related topics are addressed by educators and youth workers. While data privacy is the most commonly covered topic (58.3%), and recognising misinformation is discussed with nearly half of young people (48.3%), the ethical use of AI…

  • Risks and needs

    β€”

    by

    in

    AIβ€” especially generative AI β€” amplifies social engineering threats by making scams, phishing, and deepfakes more convincing. Deepfakes and AI-generated media erode public trust and information integrity, posing serious risks to privacy, identity, and societal cohesion. Detecting deepfakes and AI-based fraud remains a technical and forensic challenge β€” meaning that raising awareness, digital literacy, and…

  • AIwareness Research Report

    β€”

    by

    in

    This report provides an in-depth overview of the current state of awareness and safety in the online environment, with a particular focus on young people and the professionals who support them.

  • Social Engineering

    β€”

    by

    in

    Social engineering is a psychological manipulation technique attackers use to trick people into revealing sensitive information or performing actions that compromise security. Instead of hacking systems, social engineers target human vulnerabilities. The post walks through common tactics and how to protect yourself: social engineering preys on trust and curiosity, so stay vigilant to outsmart attackers!
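    The cues described above can be illustrated with a small sketch. This is not from the post and the cue phrases are assumptions chosen for illustration; real phishing detection is far more nuanced than substring matching.

    ```python
    # Toy heuristic: flag common social-engineering red flags in a message.
    # The categories and cue phrases below are illustrative assumptions,
    # not an actual detection ruleset.

    RED_FLAGS = {
        "urgency": ["urgent", "immediately", "within 24 hours", "act now"],
        "credential request": ["password", "verify your account", "login details"],
        "too good to be true": ["you have won", "free gift", "claim your prize"],
    }

    def flag_social_engineering(message: str) -> list[str]:
        """Return the red-flag categories whose cue phrases appear in the message."""
        text = message.lower()
        return [
            category
            for category, cues in RED_FLAGS.items()
            if any(cue in text for cue in cues)
        ]

    msg = "URGENT: verify your account within 24 hours or lose access!"
    print(flag_social_engineering(msg))  # ['urgency', 'credential request']
    ```

    A checker like this only catches the crudest lures; the protective habit it models — pausing on urgency and credential requests before acting — is the real takeaway.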

  • AIwareness

    β€”

    by

    in

    πŸŽ‰ Exciting News πŸŽ‰ We are thrilled to announce our first project, AIwearness 🌍✨ This is a monumental milestone for us, as it marks the beginning of an incredible journey toward building a safer community. With this grant, we aim to connect diverse communities, empower individuals through education on safety and AI. Our mission is…