This was my second ShmooCon. It was just as fun as the first time, and this
time I did various other activities (solving a few of Raytheon’s CTFs,
practicing lockpicking, looking at all the codes in the conference program)
in between interesting talks. For more details about the conference itself,
see last year’s report.
Videos of the talks have been posted.
I have simplified some of the talks’ titles.
The three talks I found most interesting:
- GDPR is probably going to have a massive effect this year on what many companies’ security and incident response processes look like. This talk is an introduction to something many people will want to dive deep into.
- The Patching firetalk: patching is a low priority in too many places, and perhaps the specific causes and steps in this talk can help convince people that it matters.
- Robot Attacks, a really fun talk that is also practically applicable to places that have to take OPSEC seriously.
The badge was interesting (and more practical) this year: a battery-powered
WiFi signal strength meter. It also highlighted the difficulty of shipping
hardware: one of the boxes containing badge parts went missing, and the
firmware from the factory had LEDs set to blindingly bright, so all badges
had to be reflashed just before the conference.
The signature opening rant by Bruce Potter had the overall theme of
responsibility. Responsibility for systems: instead of systems that keep
people safe, we make systems that are fragile and full of sharp edges for
users to cut themselves on. Responsibility for features that are dual-use:
inadvertently enabled geolocation can be bad for opsec;
but it can be super helpful in emergency or to help distinguish between
facts and fiction (such as US-related political tweets being posted from Russia).
Responsibility for critical infrastructure, such as Twitter: it was
a lot more useful than the telephone system in the Puerto Rico relief efforts, for example.
Supporting critical software, such as OpenSSL. Potential legal responsibility
for bad software–software liability.
Don’t Ignore GDPR; It Matters Now!—Thomas Fischer
GDPR will come into effect this year, and its impact will be felt worldwide as EU
trade agreements begin to come with GDPR requirements / harmonization
strings attached. The talk covered some of GDPR’s provisions and their practical implications.
Organizations, especially those that did not pay too much attention to data
protection before, will have to change a lot of their processes. For example,
how will their incident response process handle the requirement to notify
customers and regulators about breaches within 72 hours of discovery? The
definition of breach might need changing, since covered events are “alteration”
and “unauthorized disclosure”, not just “destruction” or “loss”.
How are companies going to track all pieces of covered “personal data”? That
category includes not just names but also indirect identifiers like IP addresses,
IMEIs, GPS coordinates, social network handles, and so on.
Finally, the new rights accorded to customers will also require a notification
and response process. Among them: the right to be informed; the right to be forgotten;
the right of access, with a response deadline of 40 days; data portability; and the right to object.
Testing EDR Capabilities—Casey Smith
Casey presented a framework for testing and evaluating endpoint protection
tools (EDR stands for “Endpoint Detection & Response”). Based on the MITRE ATT&CK
framework, Atomic Red Team is a collection of small, self-contained tests for
known malicious behaviors. There are about 100 tests included. For example, on
Windows, AllTheThings.dll packs 7 different whitelisting/application control
bypass techniques into a single file.
The design goal of the framework is to have a small footprint and to be able
to validate vendors’ assertions. The framework should show whether the vendor’s offering
detects, prevents, or is even able to observe a particular type of activity.
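Conceptually, each atomic test is a tiny self-contained unit: run a command that imitates a known malicious behavior, check for the artifact it leaves, and clean up. A minimal sketch of that loop, with an invented stand-in test (real Atomic Red Team tests are YAML files keyed to ATT&CK technique IDs, not Python dicts like this):

```python
import os
import subprocess
import tempfile

# A minimal, invented "atomic test": run a command that stands in for a known
# malicious behavior, check for the artifact it should leave, then clean up.

def run_atomic_test(test):
    """Execute one self-contained test; report whether its artifact appeared."""
    subprocess.run(test["command"], shell=True, check=True)
    detected = os.path.exists(test["artifact"])  # did the behavior leave a trace?
    subprocess.run(test["cleanup"], shell=True, check=True)
    return detected

marker = os.path.join(tempfile.gettempdir(), "atomic_demo_artifact")
demo_test = {
    "name": "demo: drop a file (stand-in for a persistence artifact)",
    "command": f"touch {marker}",
    "cleanup": f"rm -f {marker}",
    "artifact": marker,
}

result = run_atomic_test(demo_test)
```

An EDR under evaluation would be watched while each test runs, to see whether it observed, detected, or prevented the behavior.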
I didn’t catch if this is future work or only a passing remark: EDR tools are
complex and have high privileges on the system, like antivirus software. What happens when an attacker starts messing with the EDR tool, similar to long-standing research into vulnerabilities in anti-virus software?
Keynote—Donna F. Dodson
There were three parts to the keynote. The first part covered history and
methods of NIST; the second part was the current and desired state of cybersecurity;
and the final part was the asks for the attendees of ShmooCon.
Donna opened with a history of NIST and its unique angle: the
separation between Department of Defense (NSA) interests and protecting
sensitive but not classified information (NIST). Most governments
historically invested in computer security from the national defense angle.
NIST produced advice on building trusted computing systems (the “Rainbow”
book series) and risk management. It brought awareness of computer security
(20 years ago most people didn’t know how to approach it) and collected diverse input: often, users and civil liberties advocates had no voice at the table when security was discussed. At NIST, mathematicians, computer scientists, and engineers were augmented by social scientists and attorneys, and formal backgrounds were
complemented by people with “experimental” educations. NIST has had
its own issues with transparency (Clipper, etc.) but Donna thinks these
experiences informed change and built some trust in the community.
Most of today’s students are interested mainly in the technology side.
Donna went to “shmooze a student” and found only one student interested in
the policy side. So how do we bring technology and business, or tech
and government, together? Perhaps by adding security and quality content
to standard CS classes, something that’s generally lacking today? (Max: also
ethics and policy. Usually the only place a CS student encounters computing ethics is in one of the “intro to cybersecurity” lectures.)
We need to plan today for what’s coming tomorrow, for example quantum crypto
and modifying protocols and key infrastructure to accommodate the new
algorithms. We also need incentives to get things right, something Bruce mentioned earlier
in the opening remarks. How do we incentivize the marketplace to value
quality? It works for some markets (cars) where security/safety is a market
differentiator. She asked whether anyone would find acceptable a car that needs
to be turned off and on again to work properly: “No.” (Max: I think people tend
to tolerate it. Especially with “smart” car tech, sometimes things
just don’t work until the car is turned off and on again. I’ve seen plenty of
flaky tech in both low and mid range cars.)
Asks for attendees:
- everyone here spends the weekend because they’re passionate about security. Stay passionate, and take a different perspective on things (not just that of a security professional).
- from a policy perspective, bring those views to the table, but in a constructive manner and from a consumer perspective.
Firetalk: Creating an Insider Threat Program—Tess Schrodinger
Tess gave a nice, Dungeons and Dragons-themed presentation about designing an
insider threat program in “the Kingdom of Insecuria”. The slides are
already posted and have great D&D art!
Besides the procedural steps (the components of the program), the presentation
leads with, and highlights, the need for leadership buy-in and early identification
of key stakeholders who can be important allies in convincing leadership to act.
Firetalk: Your Defense is Flawed—Bryson Bort
Bryson argues for looking at the next step after exploitation, rather than
at the exploitation attempts themselves, when defending against malicious actions.
His reason is that for most systems, given infinite time, exploitation is certain
(there are many programming flaws in complex systems, and complexity is very easy
to achieve) but there is a limited number of post-exploitation
steps (data exfiltration, communication… payloads) most attackers are interested in.
Bryson himself supplies a counter-argument when talking about the great many ways
to communicate information and bypass air gaps, with new methods regularly discovered,
such as Mordechai Guri’s work.
Firetalk: Let’s Kill all the CISOs—Alexander Romero, Steve Luczynski
“Do we even need CISOs at all”–asks Steve, who is currently a CISO. “No kid wants to be that, so why do we have them?”
Depending on the company, technical prowess is not actually required for the role. Some
companies have a risk officer already; can the risk officer take over CISO’s position?
Is it easier to teach business to a technical person or technical stuff to a business person?
Firetalk: Patching–It’s Complicated—Cheryl Biswas
Cheryl talked about the challenges of actually doing one of the most basic threat
mitigation methods: keeping the systems up to date.
There are a lot of roadblocks to patching. Among them: lack of accountability,
especially in enterprise culture, where no one takes ownership of issues. Patching
can be technically difficult and benefits from a release management process;
without one, it becomes harder still.
Damned if you do, damned if you don’t: “What if patching breaks something?” versus
“How many ransomware incidents have you had to recover from?” Is the cure worse than the disease?
Sometimes this results in a decision not to patch: “we’re not patching it
because we’ll break it. We’ll run it until failure and replace it.” In those
cases, a mitigation has to be in place.
The Background Noise of the Internet—Andrew Morris
Andrew talked about his setup to capture and analyze “background traffic”–things
that people just hit every (or random) IP address with, and some of the findings.
The API is published on GitHub
and someone (didn’t catch who) built an Angular frontend at viz.greynoise.io.
For the setup,
libcloud is used because it is multiplatform and is an API,
unlike Terraform. There are various differences across cloud providers that need
to be worked around: NAT (or the absence of it), network adapter name differences,
usage of cloud-init. No other setup should take place until
cloud-init finishes running.
Cloud-provider-supplied health checks are not used, and
iptables rules are set up
to be “ridiculously aggressive” - everything inbound is logged, outbound is ignored.
Nodes run some standard services (ssh, telnet, http, etc.) at a lower interaction level than a honeypot.
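With logging that aggressive, the raw material is just kernel log lines from the iptables LOG target. A sketch of turning those into structured events (the sample line is synthetic; SRC, DST, PROTO, SPT, and DPT are the standard fields the LOG target emits):

```python
import re

# Parse kernel log lines produced by an iptables LOG rule into dicts.
# The LOG target emits space-separated KEY=VALUE tokens.
FIELD_RE = re.compile(r"\b([A-Z]+)=(\S*)")

def parse_iptables_log(line):
    fields = dict(FIELD_RE.findall(line))
    return {
        "src": fields.get("SRC"),
        "dst": fields.get("DST"),
        "proto": fields.get("PROTO"),
        "sport": int(fields["SPT"]) if "SPT" in fields else None,
        "dport": int(fields["DPT"]) if "DPT" in fields else None,
    }

# Synthetic example of a logged inbound packet (a telnet probe).
sample = ("Jan 15 10:00:00 node kernel: INBOUND: IN=eth0 OUT= "
          "SRC=203.0.113.7 DST=198.51.100.2 LEN=40 TTL=241 "
          "PROTO=TCP SPT=54321 DPT=23 WINDOW=1024")
event = parse_iptables_log(sample)
```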
For analytics: real-time processing is not needed, which simplifies things a lot.
Activity that happens on only one node can be ignored. IPs are
enriched with data like ASN, rDNS, organization, country… ARIN has a public API for this.
For time series analysis, need to know how many nodes are currently collecting the data.
Software-wise, Postgres is nice but too slow for this; Andrew uses Postgres + Cassandra.
Google PubSub is also nice but charges per message. Watch out for that bill.
The best results, in order, came from iptables, http, telnet, ssh, and p0f. There are a lot of popped routers / residential IPs
attacking people (500k+). Some ASes / organizations show up a lot: Brazil’s
“Telefonica brasil SA” has 300k hits in the database, 6x as many as the next one. Cloud
providers are similar to each other, roughly the same number of hits for AWS and Azure.
There will be spoofed traffic. Sometimes, “sampling” nodes get a cloud IP
that is “dangling”, meaning some DNS record elsewhere still resolves to it.
This traffic is not representative of “background noise”, and we don’t want it in the dataset.
Worm finder application: flag a machine that scans for the same port it has open
itself. Not interesting for port 80, but weird ports are an indicator. Zmap traffic
is super easy to fingerprint, by design.
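The heuristic itself fits in a few lines. The data shapes and the list of “boring” ports below are my assumptions for illustration, not the talk’s:

```python
# "Worm finder": flag source IPs that scan for a port they also expose
# themselves, which is a classic sign of worm propagation. Ports that are
# scanned by everyone all the time carry no signal and are skipped.
COMMON_PORTS = {22, 25, 80, 443}

def find_worm_candidates(scans, open_ports):
    """scans: iterable of (src_ip, dst_port); open_ports: {ip: set of ports}."""
    candidates = set()
    for src, port in scans:
        if port in COMMON_PORTS:
            continue  # too common to mean anything
        if port in open_ports.get(src, set()):
            candidates.add((src, port))  # scanning for a port it exposes itself
    return candidates

# Toy data: one host scans for 7547 while having 7547 open itself.
scans = [("203.0.113.5", 7547), ("203.0.113.5", 80), ("198.51.100.9", 7547)]
open_ports = {"203.0.113.5": {7547, 22}, "198.51.100.9": {443}}
worms = find_worm_candidates(scans, open_ports)
```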
A project like this also needs opsec, since bad guys will be looking at the results too.
- hard to fingerprint
- encrypt all the things
- shift infrastructure constantly
- “obscurity through obscurity is absolutely real”
- no names anywhere on collection boxes, no useful domains anywhere on collection boxes
- dockerize (Max: not sure what this helps with, OPSEC-wise)
- reduce the oracle surface (from ingest to API/UI correlation)
- minimum number / node thresholds to show any results
- delays in the pipeline
- How does the value of this compare to what SANS ISC provides? A: don’t know, probably a bunch of the same thing.
- IPv6? A: no.
- How far to go down the road of emulating protocols? A: ja3 fingerprint is recorded for SSL.
Fingerprinting SSL with JA3—John Althouse & Jeff Atkinson
JA3 is a tool to fingerprint an SSL connection by looking at the Client Hello.
This is useful because the fingerprint is affected by the tools attackers
use, and tools are relatively difficult for attackers to change.
An SSL Hello handshake contains a lot of information. To summarize
it, we need a fingerprint that’s easy to create, share, and use,
but also unique to the client. The fingerprint is an MD5 hash of a string
derived from interesting Hello attributes. This is not a silver bullet;
collisions can happen, a lot (especially when clients use OS-level network primitives). There’s
a list of hashes at ja3/tree/master/lists.
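The construction is simple enough to sketch: take five Client Hello fields, join the values within each field with “-”, join the fields with “,”, and MD5 the result. The numeric values below are an arbitrary illustration, not a real client’s:

```python
import hashlib

# JA3 builds its string from five Client Hello fields: TLS version,
# cipher suites, extensions, elliptic curves, and EC point formats.
def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    parts = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(parts)           # e.g. "771,4865-...,0-...,29-...,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Arbitrary example values, not taken from a real client.
fp = ja3_fingerprint(771, [4865, 4866, 49195], [0, 11, 10], [29, 23, 24], [0])
```

The same client software produces the same string, so the MD5 is a compact, shareable identifier for "this TLS stack, configured this way".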
By looking at hashes and domain combinations, interesting clients pop up.
Some cat-and-mouse games: initially, Metasploit SSL was trivial to detect by
looking for a lowercase character followed by two uppercase characters in the
(randomly generated) city name. The next iteration changed that but used a cert
very similar to Ubuntu’s snakeoil cert, so specific attributes still made it easy.
Eventually, the code was changing way too often to chase. The interesting problem
becomes: can we detect Metasploit without looking at the cert at all?
Interesting applications of JA3: Windows does not natively make network connections
from PowerShell unless it’s told to, so in many environments just looking for
PowerShell’s fingerprint is sufficient. Even if there are whitelisted uses in the
infrastructure, detection is still easy.
Searching for file uploads (exfiltration): a search for sustained high outbound
bandwidth that logs or triggers an event when the upload ends. Example: a Dropbox
client. But what if it’s PowerShell uploading to Dropbox? A user wouldn’t do that.
JA3S: a similar idea, but for the server Hello packet. JA3S by itself is not
specific enough to alert on, but a combination of JA3+JA3S can be (for example, for Salesforce).
JA3S is not currently open-source, looking for more community feedback and
improvements. Not looking to release a JA3 blacklist since this is environment specific.
- Estimates for unique JA3 in malware vs. OS level? A: No estimates, but VirusTotal is considering adding JA3 to the signature in the paid version.
- Considered looking at other SSL packets besides Hello? A: no.
- Does TLS1.3 work as expected? A: yes, 1.3 stuff is in its own extension, so previous tools just work.
Building Absurd Christmas Light Shows—Rob Joyce
This talk was interesting because it reminded me of an older definition of “hacking”,
meaning building something interesting and cool. And complex light shows are awesome.
Rob has been investing a lot of time into nice Christmas light shows. They do not
require a lot of computing power - one Raspberry Pi can drive the overall scenario.
There are open source projects, a big community, and plenty of developers dedicated
to working on light shows.
The basic building block is a smart LED: an individually addressable “pixel” that can
be set to an arbitrary color, usually controlled using a serial protocol (typically WS2812B).
These building blocks are readily available online, preassembled and in kit form. The pixels
can be arranged in strings, shapes, arcs, and so on. There are some low-power (10-30W)
flood lights for filling up the shape of the house.
Networking things together is often done over Ethernet using the sACN (E1.31) standard.
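Under the hood, a show is just successive frames, each an array of RGB values pushed out to the pixels. A toy sketch of generating a “chase” effect (the function and its parameters are mine, not from the talk; a driver layer such as rpi_ws281x or an sACN sender would consume frames like these):

```python
# One animation frame for a string of WS2812-style pixels: a bright head
# moving along the string with a fading tail behind it.
def chase_frame(num_pixels, step, color=(255, 0, 0), tail=3):
    frame = [(0, 0, 0)] * num_pixels       # all pixels off by default
    for i in range(tail):
        pos = (step - i) % num_pixels      # wrap around the string
        # head (i == 0) is brightest; the tail dims in integer steps
        frame[pos] = tuple(c * (tail - i) // tail for c in color)
    return frame

# Ten frames of a red chase over a ten-pixel string.
frames = [chase_frame(10, s) for s in range(10)]
```

A Raspberry Pi easily generates frames like these at animation rates, which is why one Pi can drive the whole scenario.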
Even if git history is rewritten, secrets committed to a git repo might still be present
in the pack files. Justin demonstrated the related git internals and presented Grawler, a tool that looks for secrets in this manner. More details
about this can be found in Chapter 10 of Pro Git.
To fully erase committed secrets, pack files need to be modified. BFG Repo Cleaner is a tool to “cleanse bad data out of repository history”,
and is also useful for removing accidentally committed big files.
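The core idea is easy to sketch: enumerate every object in the repository, not just what the current refs reach, and grep each blob for secret-looking strings. This is my illustration of the approach, not Grawler’s actual code, and the regexes are a tiny sample ruleset:

```python
import re
import subprocess

# A small, illustrative set of secret-shaped patterns.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)password\s*=\s*\S+"),
]

def find_secrets(text):
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

def scan_repo(repo_path):
    """Walk ALL objects, including blobs only reachable from pack files,
    so secrets survive even after history rewriting is covered too."""
    objects = subprocess.run(
        ["git", "-C", repo_path, "cat-file", "--batch-all-objects",
         "--batch-check=%(objectname) %(objecttype)"],
        capture_output=True, text=True, check=True).stdout
    hits = {}
    for line in objects.splitlines():
        sha, otype = line.split()
        if otype != "blob":
            continue
        blob = subprocess.run(["git", "-C", repo_path, "cat-file", "-p", sha],
                              capture_output=True, text=True).stdout
        found = find_secrets(blob)
        if found:
            hits[sha] = found
    return hits

# The pattern matcher on its own:
sample_hits = find_secrets("db_user=admin\npassword = hunter2\n")
```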
Cyberlaw: Year in Review—Steve Black
This talk covered the most significant cybersecurity-related legal events of the past
year and previewed important changes coming up this year. The pace of the talk was
relentless; you will want to watch the video to catch all the discussion I was too slow to capture.
The top events of the past year:
- Alaska applied CFAA to the Mirai botnet. Why Alaska? “Highly dependent on internet access”
- cyber vandalism == felony, by 9th Circuit Court (defacing a website using old credentials)
- A new request for copyright act exemptions for software, by the Library Copyright Alliance (example: WordStar. The license has expired but copyright protection remains, and all the Game of Thrones books were written on a DOS box using WordStar.)
- right of repair law was enacted in Massachusetts, hearings were held to apply that to the smartphone industry as well.
- hiQ vs LinkedIn: hiQ scraped a ton of content from LinkedIn. LinkedIn blocked them; the court said it is not OK to block access to public information.
- Neiman Marcus and Home Depot were breached and paid out some $.
- Equifax: 145.5 million potential victims; 240 class action suits pending.
Possible mitigations for the Equifax event:
- freeze credit should be possible without a fee
- credit checks for employment might be limited
- the credit reporting agency has to fix any issues (liability switches to them)
Most litigation alleges negligence, which requires showing:
- duty: there was a duty
- breach of duty: this duty was not observed
- causation: the breach of duty caused the harm
What can those duties be?
- industry warnings (pay attention to industry discussions, or be liable)
- ToS (are these terms actually upheld by the company?)
- applicable state laws
- if you knew or should have known about the problem
Need to show actual damages in some federal courts, but not in others - might go to Supreme Court.
Most significant expected events of the next year:
- new financial regulations in New York State
- “data on our citizens has to be stored in our country” - China, Russia, and others
- in some states, “hacking back” is now illegal. Legal status of honeypots?
- FBI will drop cases where network investigative tools were used, instead of disclosing the tools. If you have tools you want to keep out of the public eye, some thought needs to be put into keeping them safe when going to court.
Defending Against Robot Attacks—Brittany Postnikoff
“Those people were obsessed with how cute the robots were, and I was thinking - how can I exploit that?”
Brittany’s talk wins two of my personal prize categories at once: “The talk
where my expectations based on the title and description were the farthest
away from what actually happened” and “Most profanity per unit of time in a talk”.
Prizes notwithstanding, this talk summarizes Brittany’s research into the ways
robots anyone can buy today can be used to influence us and to spy on us.
What is the definition of a robot? For this talk: physical, with sensors/actuators and AI.
Roombas now have directional mics and HD cameras, and are marketed as security robots.
The first exploit this enables is visible spying; countermeasures can be as simple
as pocket change (making it hard for small robots to move around). Small robots can
also be used for hidden spying (where the robot itself is not openly visible) and for
unauthorized access, which will perhaps become more serious with the spread of
“delivery by robot” services.
Humanoid robots usually have a lot more sensors than a simple Roomba,
are networked, and have more control over their operations (perhaps using
a web interface). Brittany researched persuasion, abuse of authority
(turns out people trust robots a lot), and aggressive sales tactics.
As a final harrowing note, the research shows that CS masters and graduate
students are just as susceptible to being fooled by robots as members of the general population.
Not Your Grandfather’s SIEM—Carson Zimmerman
Carson examined the changing goals for security information and event management
tools in the age of Elasticsearch, where tons of logs and other data can be searched.
There was a time when one monolithic security product was sufficient, and Security was
the only team looking at it. It was comparatively slow, and modern clustered search products
are amazing in comparison, but there is still a role for “classic” SIEM. Each business
role will have different questions to ask of the data we’re collecting; one interface does not fit all.
Today we can’t put an agent on many of the things we must defend, and there are
too many things in general, so careful choices of resource investments are necessary
and a 100% coverage goal is not reachable. There are many different pieces that need
to integrate in near-realtime.
There are a bunch of security logging standards already; but event data is rich
(Windows: 500 event types, almost 100 different columns overall… why lose the
richness of the data?) A hybrid approach: extract only a core subset of fields,
store everything else. Also use loose coupling or a data bus for connecting
components (Kafka, RabbitMQ). Every different log can be its own Kafka topic.
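The hybrid approach can be sketched as: keep a small core schema every consumer can query uniformly, and carry the full original event alongside so none of the richness is lost. The field names here are an assumed example, not a standard:

```python
# The shared, queryable subset every consumer agrees on; everything else
# stays in the raw event.
CORE_FIELDS = ("timestamp", "host", "event_id", "user")

def to_hybrid_record(event, source):
    core = {k: event.get(k) for k in CORE_FIELDS}
    return {
        "core": core,       # normalized subset, same shape for every log type
        "source": source,   # could double as the Kafka topic name
        "raw": event,       # full original event, untouched
    }

# A toy Windows security event with fields beyond the core schema.
win_event = {"timestamp": "2018-01-20T10:00:00Z", "host": "ws01",
             "event_id": 4688, "user": "alice", "NewProcessName": "cmd.exe",
             "TokenElevationType": "%%1936"}
record = to_hybrid_record(win_event, "winlog.security")
```

Consumers that only need the core fields query them uniformly across all sources; anyone who needs the long tail of Windows-specific columns still has them in `raw`.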
A few more questions about potential pieces of the puzzle…
- What about the cloud? Will the solution meet changing needs and scale?
- Is scaling needed for this component?
- What’s the integration story?
- Is it possible and/or expensive to get data out?
Q&A: Are agents good? Yes, but they are often whitelisted and attackers leverage that.
- Ten strategies of a world-class cybersecurity operations center
Max notes: integration intentions are great, but vendors benefit from lock-in and
aim for it. In some categories, there are few to no “building blocks” that
do limited-scope things (that do not try to take over the entire security function in the
org) and play well with other systems.
CITL - Quantitative, Comparable Software Risk Reporting—Sarah Zatko et al.
The goal of this Cyber Independent Testing Lab
is to take some of the opinions and religious wars about software safety out,
and replace them with data. The general idea is product labeling for
security, similar to energy consumption labels on consumer goods. This would be
achieved by developing a heuristic for estimating security without performing
expensive trial-and-error analysis; the goal is not to identify specific
vulnerabilities. Look at binaries, not the source code; look at the building,
not the building plan.
The goal is not to tell software vendors what to do. (Max: however, if this
becomes influential enough, the vendors will certainly optimize for metrics, so
indirectly this could be the result. See the 64-bit Q&A below for an example.)
Some of the dimensions: compiler flag usage (security hardening options like ASLR),
usage of “unsafe” functions like
strcpy. When a system is composed of hundreds
of pieces, showing histograms of scores across the components can be useful.
How are other dimensions computed? Static analysis: complexity, number of
functions called. Fuzzing, lots and lots of fuzzing (a testable, recognized way
to identify issues). However, fuzzing is expensive, so CITL wants to predict
fuzzing performance based on other indicators.
Impact so far: reports to vendors (Firefox fixes), patches to LLVM and QEMU, inspiring other
companies to start similar programs (Fedora Red Team).
Q&A: There is a “low score” category for IoT devices without 64-bit support. Why?
A: some protection measures do better in 64-bit (Max: I’m guessing because of the larger
addressable memory space), and 64-bit environments have more safety features
and are updated more often.
Max: it would be nice to have a list of best-practice compilation flags
like ASLR, stack DEP, RELRO, heap protection flag, and so on.
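As a rough illustration of what checking a binary for some of those properties could look like (essentially what tools like checksec do): parse `readelf -h -l -d` output for the telltale markers. The sample text is a synthetic, trimmed excerpt, and the string matching is deliberately crude:

```python
# Crude hardening check over readelf text output. Markers:
#   ELF type DYN        -> position-independent (PIE, enables ASLR for the binary)
#   GNU_STACK w/o E bit -> non-executable stack (NX / stack DEP)
#   GNU_RELRO segment   -> RELRO; plus BIND_NOW -> full RELRO
def hardening_report(readelf_text):
    return {
        "pie": "DYN" in readelf_text,
        "nx": "GNU_STACK" in readelf_text and "RWE" not in readelf_text,
        "relro": "GNU_RELRO" in readelf_text,
        "full_relro": "GNU_RELRO" in readelf_text and "BIND_NOW" in readelf_text,
    }

# Synthetic, trimmed excerpt of `readelf -h -l -d` output for a hardened binary.
sample = """
  Type:                              DYN (Shared object file)
  GNU_STACK      0x000000 0x00000000 0x00000000 0x00000 0x00000 RW  0x10
  GNU_RELRO      0x002e18 0x00003e18 0x00003e18 0x001e8 0x001e8 R   0x1
 0x0000000000000018 (BIND_NOW)
"""
report = hardening_report(sample)
```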
Hacker Self Improvement—Russell Handorf
“Learning is fun and painful, but it’s great, pain is good”
Russell told stories about his many home projects, which can be a great way
to learn. “If you stumble into a learning experience, it tends to stick with you”,
and work-unrelated explorations can lead to work-related applications.
Some of the topics covered were building a home lab (and cable management),
owning and controlling one’s own data, log collection and PCAP all the things
at home, having a disaster recovery policy, and covert communication. Doing
those projects is a school in many of the subjects of the security field.
Some of the recommendations: use the Intel NUC platform (powerful and small)
and Raspberry Pis (low cost), have a change management policy to avoid
household disputes, and limit the number of projects in progress (and the spending on them).
A few fun projects were explored in detail, like a high-precision rubidium-based
clock (the same kind of system cellular towers use; ~$50 in parts off eBay) and a signals
intelligence system that actually helped solve a crime in the neighborhood.
- molo.ch - packet capture and indexing tool on a massive scale.
- Bro has been mentioned a bunch of times (concisely described as: netflow, but with layers 5-7 included).