Some of the biggest online advertising networks this weekend served malware-laden ads to some of the Internet’s highest-trafficked websites.
Some of the most visited sites on the Internet began delivering malware-laden ads this weekend. The sites affected included The New York Times, the BBC, MSN, and AOL. Visitors to an affected site were not at risk unless they clicked on an infected ad. After clicking, users were taken to another website, which attempted to infect them with either the CryptoWall ransomware or a trojan that gives the attackers control of the infected computer. The good news for FOSS Force readers is that the malware seems to work only against Windows, so GNU/Linux users are considered safe.
Although the sites delivering the ads are not at fault, this attack does point to a major weakness in the current method for delivering ads to websites. The attack affected ad networks owned by Google, AppNexus, AOL, Rubicon and possibly others. These networks must shoulder at least some of the blame, as they are the gateway on which most advertising-supported websites, both large and small, depend to ensure that the ads displayed to their visitors are malware free.
According to a blog post published Monday by Trend Micro, tens of thousands of users may have been infected by the campaign, which takes advantage of vulnerabilities in Adobe Flash, Microsoft Silverlight and other software. The cracker/hackers behind the campaign were able to deliver ads by way of a once-trusted ad-serving domain name, which its owner had allowed to expire in January and which had been purchased just days before the current attack.
Another security company, SpiderLabs, detailed how it initially discovered the attack:
“If the code doesn’t find any of these programs, it continues with the flow and appends an iframe to the body of the html that leads to Angler EK landing page. Upon successful exploitation, Angler infects the poor victim with both the Bedep trojan and the TeslaCrypt ransomware – double the trouble.”
According to the SpiderLabs report, several other once-trusted but now expired media domain names have also been snapped up in the last few days and are “exhibiting the same characteristics as brentsmedia[.]com,” meaning this is no time for ad-delivering networks to rest on their laurels.
Jerome Segura of Malwarebytes reports that the attack was preceded by a smaller attack, begun on Friday and delivering a different malicious payload, which may have been a test run for Sunday’s large-scale attack.
This latest malvertising attack, like most others, exploited a security problem built into the delivery system of most advertising networks. Although a network may vet ads hosted and served from its own servers, the big networks often deliver ads from other sources that never touch their servers and are therefore not vetted. This practice introduces a weak link into the process and is an issue that needs addressing.
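The unvetted third-party hop described above is easy to picture in code. Here is a minimal sketch of the idea, not any network’s actual pipeline, using hypothetical domain names (the real networks’ vetting systems are not public):

```python
# Domains the (hypothetical) network hosts and has inspected itself.
VETTED = {"ads.example-network.com", "cdn.example-network.com"}

def unvetted_hops(ad_chain):
    """Given the ordered list of domains an ad request passes through,
    return the hops the network never inspected on its own servers."""
    return [domain for domain in ad_chain if domain not in VETTED]

# An ad that starts on the network's own servers but then pulls
# content from domains the network never sees.
chain = [
    "ads.example-network.com",     # first-party, vetted
    "thirdparty-ads.example.net",  # never touches the network's servers
    "expired-resold.example",      # the expired-and-resold scenario
]
print(unvetted_hops(chain))
```

The point of the sketch: everything after the first hop is outside the network’s control, which is exactly where the expired-domain trick in this campaign slotted in.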
And the NYT was recently in the headlines for beginning to detect adblocking and give users two choices: a subscription (which doesn’t turn off the ads) or whitelisting the site, thus disabling the adblocker. Apparently they were only rolling this out to a few users for now, probably testing the waters.
(It should be obvious, but google ‘”new york times” adblocker’, with the double quotes but not the single quotes, for many stories on it. As should also be obvious, googling doesn’t have to be done on Google, either. You can use your preferred search engine, such as the more privacy-respecting DuckDuckGo, if you prefer.)
Now they’re serving malvertising.
And they wonder why so many are resorting to adblockers.
Here, I don’t really use an adblocker per se. But I see very few ads, because I use both NoScript and RequestPolicy as security enhancers. Just because I choose to load a page off some site doesn’t mean I trust, or want to load or run, all the resources that page requests from often a dozen or more /other/ sites: many of them tracking sites of some sort or other, and some of them ad delivery sites that /might/ be OK on their own (or not) but, as we see here, trust other sites in turn that are definitely NOT trustworthy.
So when I go to a new site, or a site I don’t visit often enough to have already set up its permissions (which requests it’s allowed to make to other sites, and which scripts from those sites it’s allowed to run), I’ll often see a pretty jumbled page, because the CSS that would make it look normal is being served from some other site. Sometimes I just read the page that way; sometimes I figure out which site is serving the CSS (usually not too hard, as it’s a related domain name) and allow it. Similarly with images.
Sometimes pages load most images from their own related sites, which I’ve allowed, but load images specific to that story from elsewhere. This seems very common for desktop and distro review stories, and for stories incorporating graphs, where many or all of the screenshots are served from somewhere apparently unrelated to the site I’m actually visiting: perhaps the author’s own domain, or some other domain where the review first appeared. That’s a bit more of a hassle, since each such story will tend to have its own image sources. Fortunately, there’s a temporary-allow option as well, but you still have to enable the sources for each story… or simply do without those images.
For many years adblockers worked much the same way, and it was only the relatively security-conscious and technically literate who were willing to go thru the trouble. Now many adblockers use curated blocklists supplied and updated by their creators, making them more accessible to the masses, who no longer have to constantly fool with settings to keep the blocker working or keep it from blocking too much. At the same time, that opens up the possibility of particular ad vendors sponsoring the adblocker to unblock their own ads, which is becoming much more common and is apparently how Apple’s adblockers, etc., work. So the masses are starting to adblock, and the viewer-abusive, security-nightmare ad industry, along with the content providers reliant on it, is having to adapt.
Meanwhile, the folks running user-permissions-based browser software such as RequestPolicy and NoScript are as they always were: setting up permissions for sites, which they can then visit without too much issue, and not worrying about it too much after that.
But even this has its dangers. What if wp.com let its domain registration expire, for instance? Then the permissions I have set allowing fossforce.com to fetch from wp.com would let it fetch whatever the new owner substituted. Can we say potentially exploitable security vuln? And NoScript says I’m allowing scripting from so.wp.com as well, tho not wp.com itself, so they could run scripts.
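The expired-domain scenario can at least be caught before it happens by checking a domain’s WHOIS expiry date. A minimal sketch of the parsing side, assuming WHOIS output with the common “Registry Expiry Date:” field (the exact field name varies by registry, and the sample text below is made up):

```python
from datetime import datetime, timezone

def expiry_from_whois(whois_text):
    """Pull the expiry timestamp out of raw WHOIS output.
    Assumes the common 'Registry Expiry Date:' field; registries vary."""
    for line in whois_text.splitlines():
        if line.strip().lower().startswith("registry expiry date:"):
            stamp = line.split(":", 1)[1].strip()
            return datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return None  # field not found

def is_expired(whois_text, now=None):
    """True if the WHOIS record shows an expiry date in the past."""
    expiry = expiry_from_whois(whois_text)
    now = now or datetime.now(timezone.utc)
    return expiry is not None and expiry < now

sample = (
    "Domain Name: EXAMPLE.COM\n"
    "Registry Expiry Date: 2016-01-05T00:00:00Z\n"
)
print(is_expired(sample, now=datetime(2016, 3, 14, tzinfo=timezone.utc)))  # True
```

A permission manager that ran a check like this periodically could warn before an allowed domain lapses and gets resold, which is exactly how the attack in the story got its trusted foothold.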
It may not be as likely as blindly allowing some ad provider to pull from yet another provider’s site, in a chain that could be who knows how many sites long. That’s partly because of the way the exploits usually work (as they did in the above story): the malware isn’t hosted directly on the first site, but reached through a redirect chain several sites long in order to avoid easy detection, and that would normally be thwarted here because I don’t in turn allow wp.com access to whatever malware sites. But it’s a possibility I need to take into account before setting a permanent permission allowing fossforce.com access to wp.com, as I have.
Meanwhile, right on this page, there are requests to magickalwords.com and pintrest.com, which I’ve not specifically blocked but which remain blocked by the default-block policy I have set for RequestPolicy, and to lijit.com, facebook.com, and linkedin.com, all of which have global blocks set: *.facebook.com and *.lijit.com are blocked as tracker domains by RequestPolicy itself, and linkedin.com is specifically globally blocked by the user (me) for the same reason. I went to fossforce.com, not to Facebook or Lijit or LinkedIn, and there’s no reason they need to know I’m browsing FOSS Force by getting a request with a fossforce referral, so those requests are blocked.
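The *.facebook.com-style wildcard rules described above come down to suffix matching on hostnames. A rough sketch of that idea (not RequestPolicy’s actual code; the blocklist is just the domains named in this comment):

```python
# Blocked base domains; each rule acts like *.domain as well.
BLOCKED = {"facebook.com", "lijit.com", "linkedin.com"}

def is_blocked(host):
    """True if host equals a blocked domain or is any subdomain of one,
    i.e. the *.facebook.com style of rule."""
    parts = host.lower().split(".")
    # Check every trailing suffix of the hostname against the list.
    return any(".".join(parts[i:]) in BLOCKED for i in range(len(parts)))

print(is_blocked("www.facebook.com"))  # True: matches *.facebook.com
print(is_blocked("fossforce.com"))     # False: not on the list
```

Naive suffix matching like this is the whole trick; real blockers layer on public-suffix handling and per-site exceptions, but the matching core is the same.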
But my question is: why is the fossforce page I loaded trying to load them in the first place? Facebook in particular is known to be an abusive tracker, with many sites sending requests to Facebook so it knows when people are browsing those sites. Why is fossforce one of them?
(Note that some sites actually care enough about their users’ privacy, while still wanting to offer them the convenience of Facebook links, to host site-internal Facebook widgets that don’t actually make requests to Facebook until the reader clicks them. Facebook thus doesn’t track users of those sites until they go to actually post to Facebook by clicking the site-internal widget, because it isn’t until then that an actual request to Facebook is made. Why isn’t fossforce respectful enough of readers’ privacy to do likewise?)
Finally, at least the Flash and Silverlight elements of the malware campaign wouldn’t work here, because they’re proprietary servantware with EULAs I cannot and will not agree to, so I don’t have them on my system at all. That alone has meant I haven’t been infectible by much of the malware over the years, even when it might affect Linux users who do choose to run Flash, etc. =:^)
I was served a ransomware ad on a very prominent weather website on Saturday. Using Linux, I was able to poke at it a bit, but even so it got scary a couple of times, one event being about two dozen pop-up windows opening before I could get to xkill.
I don’t know what the ‘protocol’ is for such events, but I sent an email to the listed webmaster informing them of the ad, which was gone within a minute or two anyway.
I assume the site automatically detected the malware and blocked it. The ad showed up in one rotation of three ads cycling in a frame, and didn’t return after about 5-6 cycles.
Unlike some of this stuff, this ad was quite obviously malware, visually speaking… let’s just say coeds… spring break…
Come to think of it, maybe it was the imagery that the site detected and flagged.
So I have two questions out of all this: 1) Is it appropriate to contact webmasters or the like to tell them about foul links, ads, etc.? And 2) how do they detect stuff like this?
Do they have automated systems that follow links and look for malicious code incoming? Do they detect imagery? Does someone have to manually chase and confirm this kind of BS?
Well, that is just my point about the Internet.
They haven’t made it to be secure and safe; they’ve made it for their shady business, and it looks like they see no need to change that.
And the comment from some expert is that they need some more awareness of that…
Yeah, he speaks in German.