E-smart: Getting your Digital Licence

Creating Futures Together

What is generative AI?

Artificial intelligence (AI) is the development of computer systems that can perform tasks that traditionally needed human brainpower. Generative AI ‘learns’ from the instructions it receives and from analysing large amounts of existing content. Then it produces new material in response to prompts.

People use generative AI to create all sorts of things: essays, cartoons, videos, emails, song lyrics, voices, artworks, even realistic ‘conversations’ with chatbots.


Deepfakes and AI-generated image-based abuse

Despite the many positives of generative AI, it raises risks for the community. There have been some distressing reports of children using generative AI technologies to create degrading images of their peers.

As generative AI spreads and becomes more sophisticated, there are risks that it may be used to harm children in various ways.

Deepfakes are one such risk, and a rapidly growing use of generative AI. A deepfake is a realistic-looking but entirely fake photo or video created with AI, usually depicting a person saying or doing things they haven’t actually said or done.

How generative AI and deepfakes can be used to inflict image-based abuse:

  • Creating fake videos and photos, offensive memes, or 'deepfake' pornography.
  • 'Sextortion', where a perpetrator threatens to release AI-generated 'nudes' unless the victim hands over money or real intimate photos.

Unfortunately, there’s nothing new about online abuse. But there is a risk that generative AI could make it easier for people to target others in ways that are fast, realistic, and damaging. This intensifies the risk of children and young people being seriously harmed.


What can we do to prevent image-based abuse via generative AI?

Firstly, we can build our own skills and knowledge. If we know the tech, our children are more likely to come to us with a problem. For example, we can:

  • Try using generative AI ourselves, to see what it's like. Visit the eSafety Guide for information about different platforms and any safety issues.
  • Know the safety mechanisms for all the digital platforms your family uses. Visit our Telstra DigiTalk Hub, which provides trustworthy, practical resources to help families confidently navigate children’s technology use.
  • Secure all accounts using complex passwords or passphrases and multi-factor authentication.

Secondly, we can keep talking with our children about the risks and benefits of the digital world. For example, we can:

  • Discuss how important it is to stay in control of our personal information. For example, we can set our accounts to ‘private’, delete old accounts, block or mute people, and only accept requests from people we know well and trust. We can also think seriously about how much we want to share online at all. Even innocent pictures and videos might get misused.
  • Remind them that not everything online is accurate and that they should talk to an adult if they see something upsetting.
  • Remind them that it’s always OK to say ‘no’, and that we should always respect another person’s ‘no’. Just because we like or trust someone does not mean we have to share pictures or videos with them.
  • Explain that it is never OK to share an intimate or humiliating image or video without someone’s consent, even if it is ‘fake’.
  • Encourage empathy by asking them, ‘How would you feel if someone did that to you or to someone you love?’
  • Remind them that they can talk to you, no matter how worried or embarrassed they are, and that you will always help them resolve the problem.

What to do in instances of image-based abuse

We can think in advance about what we would do if something went wrong. For example, if our children were the target of image-based abuse, we could:

Report to the authorities

  • Report the content to eSafety or directly to the platform where it happened (see the eSafety Guide for tips).
  • Report online child sexual abuse to the ACCCE or police.
  • Report online scams to Scamwatch.
  • Report online offences against adults to CyberReport.

Check out the Take It Down site

Take It Down works with industry to get sexual images of children under 18 removed from different sites.


Tighten security

Block, mute or hide the person responsible and update your privacy settings (see the eSafety Guide for tips).

If there has been blackmail or sextortion, cut contact with the person responsible and do not send them money, images or videos.


Keep evidence of what happened

Note the times, dates, websites, platforms and people involved. Take screenshots or photos of any online chats. (Do not take photos or screenshots of any intimate images of children under 18.)


It’s important to make sure your children understand that you recognise how painful this is for them, but that there are things you can do together to help resolve it. Make sure they also understand that they are not to blame, even if they shared some images voluntarily to start with.


Hope for the future

While there are many worrying things about generative AI, it’s not all bad. When generative AI is designed and used in positive ways, it can help to prevent online abuse. For example, some people are excited about its potential to detect, flag and remove abusive or threatening content on digital platforms.

To learn more about how to approach tech conversations with your children and reduce their risk of exposure to online harm, visit our DigiTalk Hub.