
BREAKING! Australian Government To Ban Children Under 16 From Social Media


Australia is introducing laws to ban children under the age of 16 from accessing social media.

The prime minister said on Thursday that the proposed laws, which will be tabled in parliament next week, are meant to reduce the harm that social media causes Australia’s children.


“Social media is doing harm to our kids and I’m calling time on it,” Anthony Albanese said at a press conference on Thursday, adding that he has spoken to “thousands” of parents and other adults on the subject.


“This one is for the mums and dads,” he added. “They, like me, are worried sick about the safety of our kids online. I want Australian families to know that the government has your back. I want parents to be able to say, ‘Sorry, mate, that’s against the law.’”


He said there will be no exemptions to the age limit even if children already have accounts or have permission from their parents to access social media.

Mr Albanese also said that it will be up to social media companies to enforce the age limit. The platforms will have to “demonstrate they are taking reasonable steps to prevent access” for young people.


“The onus won’t be on parents or young people,” he said.


The law is expected to come into force 12 months after it is passed and will be subject to a review once in place. A national cabinet meeting of all premiers and chief ministers has been called for Friday to discuss the proposed legislation.

“This is world-leading legislation and we want to make sure we’ve got it right,” Mr Albanese said.


“We think there will be some, of course, exclusions and exemptions as well for this, to make sure that there aren’t unintended consequences – but we think this is absolutely the right thing.”


Communications minister Michelle Rowland said penalties will be imposed on social media platforms if they are found to be flouting the laws.

“The eSafety commissioner will have responsibility for enforcement and there needs to be enhanced penalties to ensure compliance,” she said.


Ms Rowland said platforms that will be impacted by the legislation include Meta's Instagram and Facebook, ByteDance's TikTok, and Elon Musk's X. Alphabet's YouTube will also likely fall within the scope of the legislation.


Social media platforms already require users to be at least 13, but that limit is not easy to enforce.


Australia’s eSafety commissioner has recommended a “double-blind tokenised approach” to ensure enforcement: information will be provided to a third party that will verify the user’s age to social media platforms without revealing other details, reported The Sydney Morning Herald.

Details of the plan are still being worked out through a trial of potential age-verification technologies.
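The "double-blind" idea has only been described at a high level, but the gist is that a third-party verifier inspects the user's identity evidence and passes the platform nothing except a signed assertion about whether the user meets the age threshold. The sketch below is a hypothetical Python illustration of that flow only; the names (AgeVerifier, SocialPlatform, issue_token, allow_signup) and the HMAC-based token format are assumptions for the sake of example, not details of the scheme actually under trial.

# Hypothetical sketch of a "double-blind tokenised" age check, assuming a trusted
# third-party verifier that sees the user's ID but forwards only an age assertion,
# and a platform that sees the assertion but never the ID. Illustrative only.
import hmac
import hashlib
import json
import secrets


class AgeVerifier:
    """Third party: checks identity evidence, issues a signed token that
    reveals nothing except whether the user meets the age threshold."""

    def __init__(self) -> None:
        # Shared secret between verifier and platform (illustrative; a real
        # deployment would likely use asymmetric signatures or an existing
        # identity standard rather than a shared HMAC key).
        self.signing_key = secrets.token_bytes(32)

    def issue_token(self, user_age: int, threshold: int = 16) -> str:
        claim = {"over_threshold": user_age >= threshold,
                 "nonce": secrets.token_hex(8)}
        payload = json.dumps(claim, sort_keys=True)
        sig = hmac.new(self.signing_key, payload.encode(), hashlib.sha256).hexdigest()
        # The token carries no name, date of birth, or document details.
        return payload + "." + sig


class SocialPlatform:
    """Platform: trusts the verifier's signature, never sees identity data."""

    def __init__(self, verifier_key: bytes) -> None:
        self.verifier_key = verifier_key

    def allow_signup(self, token: str) -> bool:
        payload, _, sig = token.rpartition(".")
        expected = hmac.new(self.verifier_key, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False  # token was not issued by the trusted verifier
        return json.loads(payload)["over_threshold"]


# Usage: a 15-year-old is refused, an 18-year-old is admitted; the platform
# learns only the signed over/under-threshold claim.
verifier = AgeVerifier()
platform = SocialPlatform(verifier.signing_key)
print(platform.allow_signup(verifier.issue_token(15)))  # False
print(platform.allow_signup(verifier.issue_token(18)))  # True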


Opposition communications spokesman David Coleman said social media use by children is “one of the defining issues of our era”.

“We want to make sure that the legislation is strong and that there aren’t loopholes,” he said.


“We don’t think that TikTok can be made safe for children, we do not think that Snapchat can ever be made safe for children, and we don’t think that Instagram can be safe for children.


“These platforms are inherently unsafe for younger children, and the idea that they can be made safe is absurd. The government shouldn’t be negotiating with the platforms.”


Meta has said it will comply with the legislation but expressed concern about the age verification technology.

“The idea that somehow you can sort of force the industry to be in a technological place that it isn’t, is probably a bit misunderstood in terms of where the industry is,” Meta’s global head of safety, Antigone Davis, said on Thursday, according to The Guardian.


“The current state of age assurance technology… requires a level of personally identified information to be shared,” she said. “It’s usually in the form of an ID or document ID, documentation or biometric type data, facial feature data for young people, and if it’s parental consent, the data that will be involved there to verify the parent is just another additional layer of data to establish.”
