
ShmooCon 2017 report

ShmooCon is a security conference held yearly in Washington, DC. With around 2000 participants, it has an audience large enough to fill 3 tracks and provide a range of extra-curricular activities, yet it is much less crazy than events like DefCon (which is ten times the size).

However, with limited attendance, tickets are rather difficult to obtain. Beginner’s luck was on my side this year. In recognition of the ticket problem, there is a concurrent, informally named “LobbyCon” in the hotel lobby (where a ticket is not required) and people who didn’t get tickets come and participate in the “hallway track”, engaging in conversation and meeting with friends.

The atmosphere is rather informal, with a bunch of fun customs, ice-breaking challenges in the extracurriculars, and NSFW jokes. The attendees are not a particularly diverse bunch (in my opinion, that reflects the composition of the general security community), but there was some Twitter traffic suggesting that various less-represented groups organize and meet during the conference. The CFP process is intentionally not blind, and the selection team prefers new speakers; there are also opportunities to sponsor students to attend, and some East Coast universities are traditionally well represented.

Videos of the talks have been posted.

If you do not have enough time to read everything, I thought the following five talks were the most interesting:

Talks

Opening Remarks, Rumblings, Ruminations, and Rants

The opening “rant” was about IoT security (no surprise). Bruce Potter suggests putting all IoT stuff behind a VPN/firewall. In his opinion, the real danger is not in IoT devices themselves but in the central servers that control the individual devices. I think Mirai proved that individual IoT devices can cause formidable damage once enough of them are organized together; that is really a variation on the same point, though, because the difference is just that attacker-controlled command-and-control servers, rather than the vendor’s original ones, now instruct the millions of devices.

U2F Zero: Secure Hardware Design, DIY Mass Production, and Amazon Prime—Conor Patrick

Conor built a U2F key that is fairly similar to Yubikeys, for $8 shipped from Amazon. It has become remarkably inexpensive to build and ship hardware in volume: 1100 units cost him close to $4k to build. Amazon takes care of shipping via Fulfillment by Amazon, and the “Small and Light” program costs around $2 per unit. The hardware includes a random number generator chip and a crypto chip; many of the crypto chips on the market do not support the asymmetric operations U2F needs, only symmetric (AES) encryption. Conor shared some challenges he had getting Amazon to recognize him as a legitimate manufacturer and seller of a brand new, previously unknown “brand” of devices.
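For context on why AES-only chips don’t cut it: U2F registration and authentication are built on ECDSA signatures over the P-256 curve. A minimal sketch of that operation with the Python `cryptography` package (my illustration, not Conor’s firmware; the signed-data layout is simplified):

```python
# Illustrative sketch of the asymmetric operation a U2F token must perform.
# Each registered site gets its own P-256 key pair; authentication is an ECDSA
# signature over data derived from the site's challenge. Not real firmware code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key generation at registration time (a secure element would keep this key on-chip).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()  # sent to the relying party

# Authentication: sign application parameter + user-presence byte + counter + challenge.
# The placeholder byte strings below stand in for the real 32-byte hashes.
signed_data = b"app_param" + b"\x01" + (0).to_bytes(4, "big") + b"challenge_hash"
signature = private_key.sign(signed_data, ec.ECDSA(hashes.SHA256()))

# The relying party verifies with the stored public key (raises if invalid).
public_key.verify(signature, signed_data, ec.ECDSA(hashes.SHA256()))
```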

35 Years of Cyberwar: The Squirrels are Winning—Space Rogue

Cyberwar, or a crippling attack on the energy grid of the US (or another big state), has been prophesied and anticipated for more than 35 years now ([a recent representative sample](http://www.nbcnews.com/news/us-news/bracing-big-power-grid-attack-one-too-many-n329336)), but nothing of the sort has actually happened. All the damage attributed to cyber attacks is dwarfed by the amount of damage done yearly by squirrels and other small animals – this is why the squirrels are winning. Even when a cause has been attributed publicly, no proof was released (“pcaps or it didn’t happen”). Space Rogue makes the point that it might be easy to bring power down for short periods of time, but much more difficult to actually keep it down; doomsday scenarios such as “a coordinated attack on 15 substations will put out power for a year” require physically destroying those sites, a goal out of reach for just about anyone. Generalizing, this leads to a discussion about realistic evaluation of risk and focusing security efforts on things that are actually likely to happen or are proven to be happening, as opposed to pie-in-the-sky doomsday scenarios.

Implantable Logic Analyzers and Unlocking Doors—Kenny McElroy

The relative insecurity of RF “badge” access systems, many of which are not encrypted, is generally known. Somewhat less known is the fact that even secure versions and more advanced systems (biometric, PIN code, etc.) often put all of the advanced logic into the terminal (the device on the wall) and use the same unencrypted protocol over the wire to the system controller (which triggers the doors) for backward compatibility. Combined with the ease of access to the wiring (some terminals are held onto the wall by one screw…), this is an attack vector that Kenny could not resist. He demonstrates a small device that clips onto the data and power wires behind the control panel, captures all of the unencrypted traffic, exposes the captured data over a wireless interface, and allows replay on demand.
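My notes don’t name the wire protocol, but the classic unencrypted reader-to-controller format is 26-bit Wiegand (a facility code and card number bracketed by two parity bits), which is trivial to decode and replay once the bits are captured. A hypothetical decoding sketch:

```python
# Hypothetical sketch: decoding a captured 26-bit Wiegand frame
# (standard layout: even parity | 8-bit facility code | 16-bit card number | odd parity).
def decode_wiegand26(bits: str) -> dict:
    assert len(bits) == 26
    b = [int(c) for c in bits]
    facility = int(bits[1:9], 2)
    card = int(bits[9:25], 2)
    even_ok = sum(b[0:13]) % 2 == 0   # leading parity bit covers the first half
    odd_ok = sum(b[13:26]) % 2 == 1   # trailing parity bit covers the second half
    return {"facility": facility, "card": card, "parity_ok": even_ok and odd_ok}

# A captured frame can be decoded for analysis, or simply replayed bit-for-bit.
print(decode_wiegand26("10000000100000000000011010"))
# -> {'facility': 1, 'card': 13, 'parity_ok': True}
```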

Keynote—Dr. Gary McGraw

“There’s a lot of bullshit in cyber, but there’s even more bullshit in what people think CISOs think and do. My God, the stuff those people write down is just absolute drivel, written by marketing people who aren’t CISOs, have never met a CISO, and don’t even know what CISOs do. But they make up some compelling nonsense.”

The keynote covered a lot of ground, but I struggled to find the thread binding the various topics together. The presenter talked about developing a career in security (“develop a rhythm; it takes time - trips, talks, press, keynotes”). It takes commitment: 30+ trips a year at the peak. Gary believes that building and breaking activities are equal partners in security, and one must train in both to be effective. He believes it is not possible for someone who has never practiced breaking systems to build one that is secure.

Some interesting resources mentioned in the talk:

  • https://www.bsimm.com/about/ - “a study of real-world software security initiatives organized so that you can determine where you stand with your software security initiative and how to evolve your efforts over time”, now in its 7th version.

  • Avoiding the Top 10 Software Security Design Flaws by IEEE Center for Secure Design

  • An upcoming CISO report (Four CISO Tribes and Where to Find Them, described as a BSIMM-like system for CISOs, coming in Q1 2017)

LangSec for Penetration Testing—Sergey Bratus and Falcon Darkstar Momot

This is a more theoretical talk based on this paper, with plenty of practical points embedded. It posits that complex protocols and languages are bad for security, and that to evaluate the dangers of a particular protocol implementation it is far more efficient to review the general design and note departures from safe design practices than to find bugs (“proofs of exploitability”) one by one; fixing such bugs one by one might not actually address the root cause of a poor design.

Bad patterns:

  • Non-minimalist input handling code. You want to see a clear code boundary where the input is recognized and only valid input is passed on into processing. When there is no such boundary, you have a “shotgun parser” - parsing and validation logic intermixed with business logic (see the sketch after this list). Also, do not eval() any input before validation.
  • Input language more complex than deterministic context-free. Endless cycles in protocols: deciders vs. parsers. In general, input validation is not decidable for Turing-complete or LBA-based languages, so try to keep protocols regular or context-free. A common way to lose context-freeness and raise parser complexity is to add connections/rules/references between different fields.
  • Differing interpretations of input language: multiple parsers that disagree.
  • Incomplete protocol specification.
  • Overloaded fields in input format (same name can mean different things in different contexts).
  • Permissive processing of invalid input. Reject all invalid messages immediately.
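Here is a minimal sketch of the recognizer boundary the first point calls for, next to a shotgun-parser anti-pattern. The message format is made up for illustration and is not from the talk:

```python
# Made-up input format: "NAME=<alnum>;AGE=<digits>"
import re

# LangSec-style boundary: a recognizer accepts or rejects the full input first;
# business logic only ever sees values that already matched the grammar.
MESSAGE = re.compile(r"^NAME=([A-Za-z0-9]{1,32});AGE=([0-9]{1,3})$")

def parse(message: str):
    m = MESSAGE.fullmatch(message)
    if m is None:
        raise ValueError("invalid message")      # reject invalid input immediately
    return m.group(1), int(m.group(2))

def handle(message: str) -> str:
    name, age = parse(message)                   # single, clear validation boundary
    return f"hello {name}, age {age}"

# Anti-pattern ("shotgun parser"): parsing, validation, and business logic intermixed,
# so unexpected input reaches deep into the processing code before anything rejects it.
def handle_shotgun(message: str) -> str:
    parts = message.split(";")
    name = parts[0].split("=")[1]                # assumes fields exist and are ordered
    age = parts[1].split("=")[1]
    return f"hello {name}, age {int(age)}"       # validation (if any) happens too late
```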

Who Wants to Allow Arbitrary Code Execution On Their Boxes? We do!—Brian Redbeard / Patrick Baxter

Redbeard works on the CoreOS SRE team, which runs quay.io; this talk is about the builds that run on quay.io to produce (trusted) images, hence the title: Quay runs arbitrary code from untrusted sources, and still has to persuade the next customer that their build on the same hardware will output something trustworthy. How does it work?

When Redbeard joined Quay, the process was to spin up a new EC2 instance per build. However, building is fairly fast (minutes), and then the instance dies, but AWS still bills per full hour. Restarting an instance and reloading an image for a clean slate, using some of the trusted tools mentioned later for assurance of non-tampering? Still a full redeploy; better, but still slow… so how about containers? Docker docker docker, right?

Containers are not lightweight VMs. But what if they could be?

KVM is a kernel driver; it does not run VMs, it accelerates them. KVM is also not a hypervisor; QEMU-KVM runs VMs using KVM. KVM does not provide “virtual hardware” (BIOS, network cards, disk controllers), so it can only be used directly if the guest OS kernel supports KVM’s interfaces; otherwise emulated hardware has to be used. LKVM is a project of CoreOS + Intel: hardware abstraction, not hardware emulation. It removes support for emulated devices; if KVM does not support the abstraction, the guest OS simply does not get access to the relevant hardware. This is fine for Linux.

rkt is the CoreOS container runtime. It can optionally run with an LKVM “stage1”: a single child process that is an LKVM wrapper with a separate Linux kernel. There is no further introspection into this guest from the host, because it is now a full-blown VM. Without that stage1, the same runtime (rkt) runs the container in the usual fashion, giving much better visibility and using the host kernel. Either way provides the same features: cgroups, namespaces, capabilities. LKVM slip-streams a new kernel at runtime into an existing container image, which provides the full userland. So we can choose between sharing a kernel and instantiating our own dedicated kernel.

No AWS; they switched to packet.net, a bare-metal per-hour hosting provider; interestingly, it offers TPMs (Trusted Platform Modules). This permits the use of something like dm-verity (kernel block-level integrity checking) and other tools to cryptographically secure hosts from the bootloader up.

Quay.io: on packet.net. Kubernetes over Packer images. The builds themselves use LKVM. Having VMs running inside containers might feel weird, but that is actually exactly how Google Compute Engine works.

Switching to Calico instead of Flannel and bridge/L3 IPVLAN and ebtables (L2 iptables).

IoT Ecosystem and Regulations Trying to Control It—Whitney Merrill / Aaron Alva

Everything is Internet of Things. “X with Internet and BIG DATA”. How do we make it safe, with good practices and regulation?

Devices should fail dumb and safe: they should keep working with no connectivity, if the company dies, or if the command-and-control servers are DDoSed.

Standard protocols are beginning to emerge, such as Bluetooth Low Energy, but they bring their own security and privacy concerns: for example, many consumers rename their devices to include their own name, and that name is continuously beaconed out.

Updates? Nothing is repaired anymore; it’s not worth the effort. Nothing is updated either, although automatic updates are gaining traction in consumer devices.

Regulations:

  • Need a DMCA exception to allow people to work on IoT research without being sued.
  • The FTC has brought several cases against IoT companies (e.g., TRENDnet)
  • Some laws do not anticipate the evolution of technology: California’s smart TV law (which limits listening) does not apply to Alexa et al. because those are not TVs.

Reading material:

Flailing is Learning: My First Year as a Malware Analyst—Lauren Pearce

Lauren spoke about her experiences and the challenges of her first year on the job in the security field.

Cybersecurity has traditionally been a teach-yourself field, and communication between practitioners used to be difficult (magazines, then conferences and the Internet came along). Some colleges have started to offer degrees, but many courses only cover the basics, and the field is still very much teach-yourself. Books help, but the really advanced stuff (current threats, including the advanced-persistent variety) is not covered by any existing book. So experienced people in the industry can take the attitude of “I figured everything out by myself–so should you”.

Lauren was the only woman on her team. That makes approaching potential mentors a confidence issue; it can also be undermining, because of a perception that “you’re here because you’re a woman, not because you’re qualified to be here”. She also says the team was going through some rough times, so there was a lack of leadership and few people to ask.

Colleagues can recommend tools that help with not reinventing the wheel: Lauren developed a method of clustering malware samples by type, and her mentor suggested Yara for pattern matching in code. The workflow became:

analyze → create yara rules → cluster samples → pick one sample of each type to investigate in depth → report → back to analysis
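As a small illustration of the clustering step (not Lauren’s actual rules or code), the `yara-python` bindings make this loop straightforward; the rule and sample paths below are hypothetical:

```python
# Hypothetical sketch of the cluster-by-rule step using yara-python.
import glob
from collections import defaultdict

import yara

# A toy rule; real rules would capture strings/structures shared by a malware family.
RULES = yara.compile(source=r"""
rule fake_family_a
{
    strings:
        $marker = "EVIL_CONFIG_v1"
    condition:
        $marker
}
""")

clusters = defaultdict(list)
for path in glob.glob("samples/*.bin"):          # hypothetical sample directory
    matches = RULES.match(path)
    key = matches[0].rule if matches else "unclustered"
    clusters[key].append(path)

# Pick one representative per cluster for deep analysis, as in the workflow above.
for rule_name, paths in clusters.items():
    print(rule_name, "->", paths[0], f"({len(paths)} samples)")
```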

“Value your fresh eyes - people already in the organization could be locked in their ways (and dysfunctional processes). New people challenge bad workflows and develop new tools that become more widely used.”

[Practical Malware Analysis](https://www.amazon.com/Practical-Malware-Analysis-Hands-Dissecting/dp/1593272901) - a book Lauren says helped a lot.

Own The Con—The Shmoo Group

This is a yearly talk about ShmooCon itself: organization, CFP process, money, and so on.

Planning for next year’s con starts the previous year. Most communication happens by email/phone, not on Slack or similar. About 90 volunteers run the conference. There were 2200 attendees; approximately 1450 tickets were released to the public (speakers, volunteers, students, and sponsors use up the rest). At the time of the talk, 1900 people had registered. Free events do not get that kind of commitment to attend.

15% of CFP submissions were accepted. If a talk could fit multiple tracks, write in what changes would be made for the different tracks - this makes it easier to evaluate. Do not send slides; they do not provide the necessary information, and it takes too long to go through them, since the first look at a submission takes at most 4-5 minutes. Follow the CFP instructions carefully; many people do not.

Ticket sales: the landing page took ~3k hits/second, 45 Mbps outbound for a text-only page.

User Focused Security at Netflix: Stethoscope—Andrew White and Jesse Kriss

“‘Make it turn green’ is a sufficient motivation for many people.”

Stethoscope is an open-source Netflix tool for self-service BYOD compliance auditing.

Netflix has a ton of employees, devices, and offices worldwide. BYOD is accepted - “just buy any machine from the Apple Store and expense it”. The culture document is open to the public: “freedom & responsibility”. However, any cultural values are embedded in and communicated by systems, tools, and procedures - not just by people and documents.

Stethoscope’s goals:

  • Self-service
  • Actionable (don’t show things users can’t do anything about)
  • Avoid forced updates and company-wide e-mails
  • Nice stickers, because people love stickers.

Python, Twisted + Klein; React and Nginx. No persistence layer; queries go to the original data sources (a sketch of this stateless pattern follows the list):

  • Windows: Landesk
  • Mac: JAMF
  • Linux: OSQuery
  • Mobile: MDM
  • Scanning suite, for (Mac?) - Carbon Black
  • Authentication: OpenID
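Here is a minimal sketch of that pattern with Klein: one endpoint fans out to upstream inventories per request and returns merged results, with no database of its own. The helper functions and field names are hypothetical, not Stethoscope’s actual code:

```python
# Minimal sketch (not Stethoscope itself): a Klein service with no persistence layer;
# each request queries the original device inventories and merges the results.
import json
from klein import Klein

app = Klein()

# Hypothetical upstream lookups; the real service queries LANDESK, JAMF, osquery, MDM, etc.
def query_jamf(user):
    return [{"source": "jamf", "os_up_to_date": True}]

def query_landesk(user):
    return [{"source": "landesk", "disk_encrypted": False}]

@app.route("/devices/<user>")
def devices(request, user):
    request.setHeader("Content-Type", "application/json")
    # Merge per-source results on the fly; nothing is stored server-side.
    records = query_jamf(user) + query_landesk(user)
    return json.dumps({"user": user, "devices": records})

if __name__ == "__main__":
    app.run("localhost", 8080)
```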

ripr–Run Slices of Binary Code from Python—Patrick Biernat

Problem statement: you need to run a chunk of binary code, perhaps a single function, to implement some pwning. Re-implementing the function in a higher-level language to achieve exactly the same result is usually not very interesting. This could be very useful for CTFs, where time is at a premium.

Unicorn Engine: a CPU emulator (“scriptable QEMU”). Binary Ninja: a reversing framework that uses an intermediate language to be architecture-independent. The current system has difficulty emulating imported code (DLLs etc.) and syscalls (that code is not part of the target binary, and could potentially branch very deep into the kernel or depend on state). (Note: this gives some interesting obfuscation ideas!)
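To give a flavor of the mechanism, here is a tiny hand-written Unicorn example in Python (my own illustration; ripr generates this kind of harness automatically from a function selected in Binary Ninja):

```python
# Tiny Unicorn example: emulate a 4-byte x86-64 slice ("lea rax, [rdi+1]")
# without running the rest of the binary.
from unicorn import Uc, UC_ARCH_X86, UC_MODE_64
from unicorn.x86_const import UC_X86_REG_RAX, UC_X86_REG_RDI

CODE = b"\x48\x8d\x47\x01"          # lea rax, [rdi + 1]
BASE = 0x1000

mu = Uc(UC_ARCH_X86, UC_MODE_64)
mu.mem_map(BASE, 0x1000)            # map one page for the code slice
mu.mem_write(BASE, CODE)
mu.reg_write(UC_X86_REG_RDI, 41)    # "argument" goes in rdi (SysV calling convention)
mu.emu_start(BASE, BASE + len(CODE))
print(mu.reg_read(UC_X86_REG_RAX))  # -> 42
```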

Github repo of the project

Excuse me, Server, Do You Have the Time?—Brian Cardinale

The talk is about weak, time-dependent tokens, usually in web apps. My main takeaway is that it’s difficult to reverse engineer these, except in a few fairly simple cases.

Don’t use time as the (sole) source of entropy; an easy way to check might be to hammer a service and see if repeated tokens are returned for the same second / ms… Which time format might a developer use?
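A toy illustration of the failure mode (and of the hammering check), assuming a token generator seeded only with the current second; this is my own example, not from the talk:

```python
# Toy illustration of time as the sole entropy source: two requests in the same
# second produce identical "random" tokens, which is exactly what hammering an
# endpoint would reveal. An attacker who can guess the second can regenerate a token.
import random
import time

def weak_token():
    rng = random.Random(int(time.time()))        # seeded only with the current second
    return "%016x" % rng.getrandbits(64)

t1, t2 = weak_token(), weak_token()
print(t1 == t2)                                  # True when called within the same second

# Recovering a victim's token: brute-force a small window of candidate timestamps.
victim = weak_token()
now = int(time.time())
candidates = {"%016x" % random.Random(s).getrandbits(64) for s in range(now - 5, now + 1)}
print(victim in candidates)                      # True
```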

Obfuscation: HMAC requires additional keys and is difficult to reverse. Type-1 GUIDs are terrible, since they are time-dependent and the format is known; MongoDB and MySQL use type-1 GUIDs (by default?)
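To see how much a type-1 GUID gives away, the embedded timestamp (and node address) can be pulled back out with Python’s standard library; again my illustration, not from the talk:

```python
# A version-1 UUID embeds a 60-bit timestamp (100 ns ticks since 1582-10-15)
# plus the generating node's MAC address, so it is predictable rather than random.
import uuid
from datetime import datetime, timedelta

u = uuid.uuid1()
created = datetime(1582, 10, 15) + timedelta(microseconds=u.time / 10)
print(created)      # recovers the creation time of the "unguessable" identifier
print(hex(u.node))  # the last field is typically the host's MAC address
```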

How do you figure out the server’s local time? From caching data, and perhaps via JavaScript, exposed random values, etc.

Firetalks

Firetalks are 15-minute presentations, often from first-timers, that run late in the evening. They are a lower-pressure way to present a security-related topic, and are typically not streamed.

  • How To Spoil All Movies: all movies and compelling presentations follow the same pattern. Telling stories is what humans do; telling them on purpose makes your life easier. Remember: Hook, Beginning, Middle, Climax, End.

  • Quick & Dirty ARM Emulation: rewrite ARM firmware to look like a Linux executable; then it can be run and interacted with without getting into the details of emulating the underlying platform. Few people fuzz firmware, and this approach is quite useful for that purpose (the firmware can be fuzzed with standard Linux tools).

  • Graph DB for Infosec: presumably, it’s like Maltego but in a VR environment? The presenter thinks RabbitMQ is nice and works out of the box. Pentoo is another security/pentesting-oriented distribution, based on Gentoo. /r/netsec is a good subreddit for security-minded folks.

  • 22 Short Films: (mis)adventures in CFP submissions

“The ability to [file a] bug report and persuade peers is just as important as technical skills”

Most organizations are not going to find enough value investigating every statistical anomaly on their systems.

Extracurriculars

  • Shmootris–a set of challenges around codes, encryption, and security. Can be played alone or in teams. The winning teams got tickets to ShmooCon ‘18. One of the series of challenges starts with clues found on different conference badges.

  • Shmooganography–there are messages around the con, hidden in plain sight. Discover and decode them, win prizes.

  • Labs–come a day early, help set up the conference network, and play with the latest network hardware and management tools, including self-provisioning, threat monitoring, etc. The main WiFi network ShmooCon rolls out is certificate-based WPA2-Enterprise. There are also other, less secure networks for experimenting and exploring.

  • Barcode Shmarcode - find an innovative way to display your conference ticket barcode, using a particular theme (this year: games), and get it to successfully scan to win prizes. Seeing DOOM run on a Cisco IP phone was nice.

  • Wireless CTF - a Capture The Flag team competition focused on wireless (WiFi and radio) challenges. A $40 SDR (software defined radio) dongle is enough to solve most of them, according to the organizers.

  • Lockpick Village

  • Hack Fortress - Team Fortress 2 contest

  • DeCSS Scan - help scan documents from the DeCSS trial and make them available digitally.