It’s been well documented since the start of COVID-induced distance learning that instructional delivery platforms are being raided by uninvited attendees. Barely a day goes by without another school reporting incidents of pornographic content being shared with students.
Commonly referred to as Zoom Bombing (to be fair, ‘bombing’ happens with Google Meet and other virtual instruction software as well), these unwanted intrusions challenge schools to better safeguard students in distance and hybrid learning instructional models. Fortunately, in most cases teachers are able to kick the intruders out fairly quickly.
Yet even a brief intrusion has immediate effects: the obvious disruption to class and the resulting lost instructional time, plus a potential domino effect for the district - explaining the incident to parents, investigating potential perpetrators (students?), and managing the publicity as news of the incident reaches the community.
Each of these incidents serves as a reminder that schools have very little control over students sharing Zoom or Meet login credentials. ‘Raid my google meet class’ or ‘Join and show porn quick’ social media posts from bombers using hashtags like #ZoomRaid or #MeetCodes provide countless opportunities for uninvited attendees to enter classes.
Videoconferencing companies have added safety measures in an effort to bolster security in K-12 live synchronous instruction. To prepare already overworked staff, some districts are opting for additional professional development to get instructional staff up to speed on practices that better safeguard virtual instruction.
But quality, effective PD takes time to develop.
And it costs money.
And it takes time to deliver.
And who’s going to conduct the training? The IT team (with all their free time)?
As much as we’d like to think teachers should be experts in these applications, the fact of the matter is that unless they’ve been involved in distance learning previously, they’re not. Their primary focus is on locating, developing and implementing effective curriculum to give their students the highest quality educational experience possible.
Will teachers become Zoom or Meet security experts? Probably not.
What’s been occurring in schools regarding pornography bombing is analogous to leaving the school doors wide open during the school day - and then scratching our heads in perplexed disbelief when a complete stranger wanders into a classroom, sits down and promptly disrobes.
Yet even with safeguards in place and even if teachers became experts in Zoom or Meet, how long would it take for hackers to figure out how to bomb classes again?
Bombing isn’t just a training problem – it’s a filtering problem.
Maybe you’ve asked yourself:
‘Why can’t our content filter keep kids safe during synchronous live instruction?! The kids are online…and it’s a web filter, right? Why can’t it filter on Zoom or Meet?’
Answer:
Because filtering images and video during synchronous live instruction requires AI filtering at the browser level done in real time.*
*(And Deledao is the only company that filters images and video in real time at the browser level.)
Here’s the thing - originally designed for the static internet of the 1990s, legacy filters struggle to keep up with today’s dynamic web of social media and user-generated content.
And they’re helpless when it comes to filtering Zoom, Meet or any other videoconferencing software, primarily because they rely on two obsolete methods of filtering (see the sketch after this list):
1. Proxy-level filtering
Unfortunately, the sheer amount of data that passes through your proxy means that it is not technically feasible for the filter to see details of what’s on the webpage.
Without the browser rendering the data first, the proxy-level filter only sees individual HTTP objects and cannot see the page as it’s displayed to the student. If your reporting shows only URLs, it’s because that’s all the filter can see at the proxy level.
Blur inappropriate pics while conducting a Zoom class? Nope.
2. Filtering using a database of domains and keywords
Most legacy filters rely on a database of pre-categorized domains and keywords to determine whether content should be blocked. That database may contain 10 million, or even 100 million, domains and keywords, and it may be continuously updated, but it simply can’t keep up with the volume of newly created content.
Mute and blur pornographic videos bombed during a Meet class? Not a chance.
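For the technically curious, here’s a rough sketch of what those two legacy checks amount to in code. It’s purely illustrative (the domain and keyword lists are made up), but it shows the core problem: both checks look at the request URL, never at what’s actually rendered on the student’s screen.

```typescript
// Illustrative sketch of legacy filtering logic - not any vendor's actual code.
// Both checks operate on the request URL; neither ever sees the rendered page
// or the video frames inside a Zoom or Meet session.

const blockedDomains = new Set(["blocked-site.example", "another-blocked.example"]);
const blockedKeywords = ["examplekeyword1", "examplekeyword2"]; // pre-categorized list

// 1. Proxy-level check: the proxy sees individual HTTP requests, not the page
//    as the browser composes it, so a video screen-shared into an allowed
//    conferencing domain passes straight through.
function proxyAllows(requestUrl: string): boolean {
  const host = new URL(requestUrl).hostname;
  return !blockedDomains.has(host);
}

// 2. Domain/keyword database check: a string match against the URL. Content
//    shared *inside* an allowed domain has no URL of its own to match, so the
//    database never gets a chance to block it.
function databaseAllows(requestUrl: string): boolean {
  const lowered = requestUrl.toLowerCase();
  return !blockedKeywords.some((keyword) => lowered.includes(keyword));
}

// A bombed Meet class: both checks happily allow the session URL.
const classUrl = "https://meet.google.com/abc-defg-hij";
console.log(proxyAllows(classUrl) && databaseAllows(classUrl)); // true -> content gets through
```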
But help is on the way!
Deledao’s browser-based AI technology represents an entirely new filtering architecture that’s designed to filter the dynamic internet of TODAY. And it does so in real time.
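What does “AI filtering at the browser level in real time” look like in practice? Here’s a deliberately simplified sketch of the general approach. To be clear, this is a conceptual illustration, not Deledao’s actual implementation; the classifyFrame function is a stand-in for whatever on-device AI model does the scoring.

```typescript
// Conceptual sketch of browser-level, real-time filtering - illustrative only.
// A content script samples frames from each <video> element as it renders,
// scores them with a classifier, and blurs/mutes any element that gets flagged.

// Placeholder classifier: a real system would run an AI model here and return
// a probability (0..1) that the frame contains inappropriate content.
async function classifyFrame(frame: ImageData): Promise<number> {
  return 0; // stub always answers "safe"
}

function monitorVideo(video: HTMLVideoElement): void {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;

  const checkFrame = async () => {
    if (video.videoWidth === 0) return; // nothing rendered yet
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

    const score = await classifyFrame(frame); // happens in the browser, in real time
    if (score > 0.9) {
      video.style.filter = "blur(30px)"; // blur the offending video...
      video.muted = true;                // ...and mute its audio
    }
  };

  setInterval(checkFrame, 500); // sample a couple of frames per second
}

// Watch every video element currently on the page; a real system would also
// observe the DOM for video elements added later.
document.querySelectorAll("video").forEach((v) => monitorVideo(v));
```

Because the analysis runs on the frames the browser actually renders, it doesn’t matter whether the content arrived via a URL, a screen share, or a webcam.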
Check out how we filter a Zoom class that’s been bombed with an inappropriate video.
Example – Our proprietary AI technology blurs and mutes a video shared during a Zoom class
Static images are blurred in real time as well!
Example – Deledao filters an inappropriate picture shared with a Meet class
The internet happens in real time…your filter should as well.
Deledao does.
Curious? Wondering how Deledao can help keep your students safe during Zoom and Meet live instruction sessions?