The transparency of open software means that security vulnerabilities are visible and can’t be quietly swept under the rug.
Another bunch of scary security alerts from your favorite Linux distro has hit the front page of FOSS Force. It was the same last week and the week before, and will be the same next week and the week after.
One FOSS-boosting friend claims the alerts are the result of “media sensationalism.” While it’s possible that there is a clickbait element to some of the reports (DROWN, anyone?), most of the reported vulnerabilities are real and serious, and we need to know about them.
But what about vulnerabilities in proprietary software? Microsoft has regular Windows security updates, most of which a Windows PC downloads automatically without giving you many details, if any, about them or the bugs they ostensibly correct.
The same is true of many other proprietary programs. They may tell you that “updates are available,” but not always why they are needed, and never with as much granularity as is the norm in the FOSS world.
Another factor is that a GNU/Linux distro is a whole stack of programs, not just one. Windows and Mac, and even Android, don’t install or take responsibility for many applications, if any. On the other hand, practically any Linux distro will tell you about problems found with just about any program in its entire repository, which may contain thousands of applications and utilities from upstream providers.
So are major Linux distros going to report lots more bugs and security problems than their proprietary “competition?” Well, duh. Of course!
The Linux distro is also likely to tell you about bugs as soon as they are discovered instead of waiting for an arbitrary day like “Patch Humpday” or a press conference where they also announce some sort of positive news — “Now includes NSA-supplied encryption back door for added security!” — or some other new feature they’re proud of.
When it comes to bugs, hacks, and security breaches, FOSS typically tells users about program flaws with no waiting.
So even though your GNU/Linux distribution probably has fewer holes and problems than most proprietary software, the fact that you learn about each and every one, down to the source code level if you want to go there, makes it seem worse than it is. The truth, of course, is there are fewer vulnerabilities in any GNU/Linux distro than in the typical Windows-plus-applications installation.
On that note, I see that my Windows PC downloaded a bunch of stuff last night and rebooted, even though I set it over and over to not install updates without my say-so, while my two Linux machines only update at my command — and hardly ever reboot even then.
Guess which pattern I prefer? That’s right! The GNU/Linux one. And so should you.
Robin “Roblimo” Miller is a freelance writer and former editor-in-chief at Open Source Technology Group, the company that owned SourceForge, freshmeat, Linux.com, NewsForge, ThinkGeek and Slashdot, and until recently served as a video editor at Slashdot. Now he’s mostly retired, but still works part-time as an editorial consultant for Grid Dynamics, and (obviously) writes for FOSS Force.
One more thing about web-based updates.
They are web-based: they travel from one point to another, and there are way too many middlemen!
One method of encryption is here; I just don't know how to make it open source and secure:
http://www.scribd.com/doc/300013825/ENCRIPTION-DECRIPTION
It does not change the fact that some of the distributions recommended for people to try should come with strict warnings.
Why? They are under-resourced to keep up with security updates, or get caught failing to update parts affected by a CVE.
Basically, Linux distributions are not security equal.
So are major Linux distros going to report lots more bugs and security problems than their proprietary “competition?” Well, duh. Of course!
True, but some of the smaller Linux distributions appear not to be reporting many security fixes, and it turns out that they are unable to keep up with updates.
People starting distributions need to think about whether they will have the resources to do the maintenance, or whether they should just join one of the existing ones.
Please note that this issue of people creating new distributions without the resources to maintain them also applies to some groups making Docker images.
So some Linux items look insecure when they are not, and some look secure when they are massively insecure.
I’m back to Ubuntu because it’s huge, well-funded, and because of its popularity has “many eyes” on it. I respect the people who run small distros, especially if they’re for a special purpose and are closely tied to one of the well-maintained biggies. But I have no special needs, so I stick to the tried and true. Easier and safer. And yes, I know…. I have friends who are *BSD developers and advocates and tell me *BSD is more secure than Linux, etc. etc. etc. But I’m used to what I’m used to, and Ubuntu GNU/Linux fills most of my needs just fine.
Better to be constantly informed about holes and vulnerabilities, than to be kept in the dark and only be informed after the script kiddies and hackers have already gotten your SS#….your Credit Card Info….and the location of your last transaction! L “Cubed” Forever!!! (L-ong L-ive _L-inux)
“apply to some groups making docker images”
|
|
|
V
“apply to ALL groups making docker images”
Fixed that for you.
Now!
If you think about software: it is stored on one web site, and it needs to move from point A to point B.
A checksum like MD5 is not a safe way to protect it… because?
However, this just makes the case for open source and the rest even stronger!
@OVVYYYXXXX
MD5 is broken and insecure. It should not be used to validate files subject to malicious tampering.
In fact: No hashing algorithm will fix that problem.
A strong hashing algorithm like SHA-256 is resistant to accidental corruption, but it’s just as insecure against malicious tampering because there’s no way to validate the hash value you are comparing against (typically listed somewhere on a website) hasn’t been altered.
Public key signing of files is MANDATORY to prevent malicious tampering. The key needs to be widely distributed through multiple channels too, not just posted on a website somewhere.
Once installed, software should reject all updates which do not match the known key.
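The "verify against a known key, then accept or reject the update" flow can be sketched as below. This is a toy illustration only, using textbook RSA with a tiny, insecure modulus and no padding; every number and filename here is invented, and in practice you would rely on real tooling such as GPG rather than anything like this.

```python
# Toy sketch of "sign, then verify before installing": textbook RSA with
# tiny primes. NOT real cryptography (12-bit modulus, no padding) -- for
# illustration of the concept only.
import hashlib

# Tiny RSA keypair for illustration (real keys are 2048+ bits).
p, q = 61, 53
n = p * q          # public modulus, part of the widely distributed key
e = 17             # public exponent
d = 2753           # private exponent: held only by the signer

def sign(data: bytes) -> int:
    """Signer hashes the update and applies the private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check authorship."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h

update = b"pretend these bytes are a package update"
sig = sign(update)
print(verify(update, sig))             # True: matches the known key
print(verify(update, (sig + 1) % n))   # False: forged signature rejected
```

The point of the sketch is the last two lines: once the public key is installed, any update whose signature does not check out against it can be rejected automatically, with no hash value on a website involved.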
This still leaves the problem of who has access to the private key…and what they may be compelled to do. Distributed key generation algorithms help somewhat with that. Especially if parts of the key are held by parties in widely separated geographical locations under different legal jurisdictions.
“…just as insecure against malicious tampering because there’s no way to validate the hash value you are comparing against (typically listed somewhere on a website) hasn’t been altered.”
This just happened, of course, with the MD5 for Linux Mint, which the hackers had altered to match their modified version of Mint with the malicious payload.
@Christine
Right.
Digitally signing binaries and educating users on a simple method for validating them should be the first priority for anyone distributing software of any kind.
Hashing algorithms like MD5 provide no protection of this sort. MD5 is even worse than most because it is weak enough that someone could have compromised the binary EVEN WITHOUT changing the hash value on the website. It should not be relied upon under any circumstances.
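The failure mode Mike describes can be shown in a few lines: if the attacker can replace the download, they can replace the published hash too, so the victim's checksum verification still passes. All file contents and names below are made up for illustration.

```python
# Sketch: why a published checksum proves nothing about authorship.
# An attacker who can swap the ISO can swap the listed hash as well.
import hashlib

genuine_iso = b"genuine distro image"
malicious_iso = b"distro image with backdoor"

# Attacker replaces both the download AND the hash listed on the page.
page_listed_hash = hashlib.sha256(malicious_iso).hexdigest()

# Victim downloads the (malicious) file and dutifully verifies it.
downloaded = malicious_iso
check_passes = hashlib.sha256(downloaded).hexdigest() == page_listed_hash
print(check_passes)  # True -- the checksum matches, yet the file is hostile
```

Note this holds even with a strong hash like SHA-256: the check only confirms the file matches the published value, not who published it.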
I will not say how it could be changed; well, it's just another approach, but it is possible…
No hints available, due to security issues.
Buy the new release from a supplier. No more checksum paranoia.
You are just wrong!
@Ernest Mann
That is not a solution. Just because you got something from a commercial entity doesn’t make it safe. They can be compromised just as easily as anyone.
Checksums are not a security feature. They shouldn’t be treated as such.
For open source software, the future of security rests with widely distributed public keys; private keys generated in a distributed manner and held by entities in widely separated locales (geographically and legally, to prevent coerced use); and, most importantly, reproducible builds, which ensure a given binary came from a given source and that your build environment is not compromised. Reproducible builds are something fairly new in the Linux world, and no distros I know of are using them as of yet, although there is work going on in Debian to make it happen.
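The reproducible-builds idea can be sketched as follows: two independent parties build the same source and compare the resulting binaries; if the toolchain is deterministic, the outputs are byte-identical. The `build` function here is a made-up stand-in for a real compiler toolchain.

```python
# Sketch of the reproducible-builds idea: independent builders compile the
# same source and compare hashes. The build step is a deterministic
# stand-in (no timestamps, no embedded paths), invented for illustration.
import hashlib

def build(source: bytes) -> bytes:
    # Stand-in for a deterministic toolchain.
    return b"compiled:" + source

source = b"int main(){return 0;}"
alice_binary = build(source)   # built on Alice's machine
bob_binary = build(source)     # built independently on Bob's machine

# Matching hashes from independent builders give users evidence that the
# binary really came from this source and no single build host was
# compromised.
print(hashlib.sha256(alice_binary).hexdigest() ==
      hashlib.sha256(bob_binary).hexdigest())  # True
```

In a real toolchain, making the build deterministic (fixed timestamps, sorted file orders, stripped paths) is exactly the hard part the Debian effort is working on.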
@Mike @Ernest Yup. The first virus I ever got was from some shrink wrapped software I bought from an Egghead Software store, back in ’95 when there were still bricks and mortar software stores.
The key is that FOSS is about openness: the user has the right to know what is being updated and why. The corollary is that patches should be pushed out as soon as they are verified, which is typical of Linux distros.
When you get an update, it travels from A to B.
I guess that you could use some secure protocols, but how secure are they?
Just imagine the joke: someone swaps Windows 10 for some Ubuntu files, and people start installing Linux instead of Windows on their PCs.
Nice one…
If some file can be changed, then its MD5 could be tampered with as well.
Public key signing of files is MANDATORY to prevent malicious tampering. The key needs to be widely distributed through multiple channels too, not just posted on a website somewhere.
This is part myth.
Most public key signing depends on a checksum method. So a breakable checksum is still breached security, signed or not.
MD5 is broken and insecure. It should not be used to validate files subject to malicious tampering.
I would not say it like that. It should not be used by itself to validate files subject to malicious tampering.
The reason: an attack that can cause a SHA1 hash collision may not cause an MD5 collision, and so on. The more hashes you can use, the better, because each one you add makes the attacker's life harder. So MD5 is still useful as one item that makes breaching the checksums harder.
There is no known checksum that is absolutely without the possibility of collision. But the harder you make it to generate a checksum collision, the more likely the colliding file will be worthless garbage to an attacker.
Debian publishes MD5, SHA1, SHA256, and SHA512, which does make an attacker's life hard. Debian also uses public key signing on top of that. This level of attention to detail is one of the first things you see disappear out of a lot of forks. Basically, "we want to do X" overrides "we want to do security properly."
Public key signing is an extra level on top of checksums. Most public key signing systems are only configured to use one checksum type at a time instead of at least two.
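The multiple-digest approach described above can be sketched in a few lines, in the style of a Debian-like file listing MD5, SHA1, SHA256, and SHA512 side by side. The file contents here are invented for illustration.

```python
# Sketch: publishing several digests for one file, as a Debian-style
# Release file lists MD5, SHA1, SHA256, and SHA512 together.
import hashlib

def multi_digest(data: bytes, algos=("md5", "sha1", "sha256", "sha512")):
    # hashlib.new() lets one loop drive all the algorithms.
    return {name: hashlib.new(name, data).hexdigest() for name in algos}

package = b"example package bytes"
digests = multi_digest(package)
for name, value in digests.items():
    print(f"{name}: {value}")

# An attacker must now produce ONE substitute file that collides under
# every listed hash simultaneously -- far harder than defeating any
# single algorithm on its own.
```

As the thread goes on to argue, this raises the bar against collision attacks but still does nothing about an attacker who can replace the published digest list itself; only a signature addresses that.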
There is no magic bullet here.
@oiaohm
Not a myth at all.
A cryptographically secure hashing algorithm is necessary for secure signing, yes. That does not invalidate what I said. Public key signing is STILL THE ONLY way to prevent malicious tampering. Its reliance on strong hashing is irrelevant. Incidentally, MD5 is not a cryptographically secure algorithm.
A hash-based checksum alone (without a digital signature) is NOT a security measure. You can use all the checksums you like and it won’t make a bit of difference. It doesn’t matter if you use one, two or ten hashing algorithms. Users still have to validate every one against a predetermined value. Where is that value stored? On a website usually. How do you know the value you are comparing to has not been tampered with, or modified in transit? Answer: You do not. The problem with hashes is there is no proof of authorship.
If you use several strong hashing functions for your binary download and put those hash values on your website, all I have to do is replace the download with a malicious file and replace the values listed on the website with my own hash values which match the malicious file. I don’t even have to compromise your site, if I can compromise a routing table or DNS. State level actors have no trouble doing such things. Public key signing stops that kind of attack. Hashing does not.
Thus, my original statement holds true:
CHECKSUMS ARE NOT A SECURITY FEATURE. THEY SHOULD NOT BE TREATED AS SUCH.
Additionally, MD5 is weak and shouldn’t be used at all.
In summary :
Checksums without digital signatures are useless against a real attacker. There are too many ways to interfere with the process of validation. Multiple checksums do nothing to help this.
Digital signatures are useful in proving authorship of a file. They are reliant upon strong cryptographically secure hashing algorithms to function. MD5 is certainly not one of those.