
AtlSecCon (Atlantic Security Conference) 2019 notes

AtlSecCon is a mid-size conference held annually in Halifax, Nova Scotia. It has the usual security conference assortment of free-time activities (a CTF competition, a lockpick lab, an Internet of Things lab, and a very photogenic lab from the St John Ambulance therapy dog program). A five-track talk schedule means there’s something interesting for everybody, and using a proper conference center means the vendor area is not as crowded and hot as at conferences that run in hotels.

Opening Keynote—Clifford Stoll

Cliff is a force of nature, and this talk has to be seen on video; words can’t do it justice. Hopefully a recording will become available. Until then, you can read The Cuckoo’s Egg.

The deck for the keynote is the original deck first presented several decades ago, printed on transparencies and intended for an overhead projector. Apparently there were none of these left in all of Nova Scotia, though I wondered whether asking any public school’s maintenance person would have turned one up from the basement.

People-Centric Security: 2019 Cyber Threat Landscape—Luigi Avino

The one-sentence summary of the threat situation in 2018 and Luigi’s predictions for 2019 would be “email and account credentials”.

In 2018, Canada ended up #4 (behind the US, India, and the UK) in the number of cybercrime victims, despite its small population. Email attacks were so popular that email authentication started becoming a requirement. Malicious URLs were used in 80% of email-based attacks, and banking trojans made up over half of malware payloads. Companies began getting targeted through compromised customer accounts (a customer gets hacked, and a fraudulent request in their name is submitted to the target).
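
Email authentication here refers to standards like SPF, DKIM, and DMARC, which are published as DNS TXT records. As an illustrative sketch (not from the talk; it assumes the dnspython package and a placeholder domain), checking a domain’s records could look like this:

```python
# Sketch (not from the talk): check a domain's SPF and DMARC records,
# two common email authentication mechanisms published as DNS TXT records.
# Requires the dnspython package; example.com is a placeholder.
import dns.resolver

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```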

Luigi’s predictions of malicious activity for 2019:

  • focus on quality over volume; increased geo-targeting (redirects to malicious or benign destinations based on IP). Geo-targeting maximizes ROI and is stealthier.
  • a lot of account credential phishing, perhaps via lookalike domains and social media lookalikes. Office365, Google, etc. credentials become the master key as all work moves to the cloud: access to the account could mean persistent access to data. Average time before detection is currently 2 weeks.
  • still plenty of examples of Office (macro) exploits.
  • BEC (business email compromise) attacks: no malicious payload; instead, social engineering - getting a human to do something.
  • coin miners and ransomware will continue. A new twist on ransomware: the attacker claims “here’s the proof”, and the proof is itself an infected document.
  • state-sponsored activity will not try to hide anymore, given open geopolitical tensions.

Defensive moves for 2019:

  • year of “user risk modeling”; organizations will apply risk-modeling tools and data to their staff, not to devices or infrastructure. Focus on “Very Attacked People” - the executive assistant to the CEO, IT admins, disgruntled users.
  • Email is still king. Granular policy, filtering, email authentication, dynamic classification will help.
  • Assume that users will click: attackers develop new techniques while users are being trained on what’s currently in use. Try to identify and block threats before they land in the inbox or are sent out (“outbound threats”).
  • Look for social media lookalikes on Twitter, Facebook, etc. - these are a new phishing vector: contacting users or replying to them while masquerading as an official account.
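
As an illustration of the kind of lookalike hunting the last point suggests, a toy detector might compare candidate handles or domains against official names by edit distance (the names below are invented; real tooling would also handle homoglyphs and punycode):

```python
# Toy sketch: flag handles/domains suspiciously close to official ones.
# The official names are invented for illustration.

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

OFFICIAL = {"acmebank", "acmebanksupport"}

def looks_like_official(candidate, max_distance=2):
    # Close to an official name, but not an exact match.
    return any(0 < edit_distance(candidate.lower(), name) <= max_distance
               for name in OFFICIAL)

print(looks_like_official("acrnebank"))  # True - "rn" visually mimics "m"
```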

Impact of GDPR on defense: GDPR prevents looking up domain ownership information online, so the time to obtain that information has increased to weeks.

Avoiding Common Mistakes in Breach Detection and Response—Ben Smith

The first half of this talk was “common mistakes” and the second half sounded more like advice for improving an organization’s security posture.

Among the mistakes:

  • Lack of practice and exercise of the incident recovery plans.
  • Processes are not up to date (plan is over a year old).
  • Not involving legal, PR, possibly compliance, IT in exercises and process documentation. All of these functions are likely to be needed during an incident.
  • Using the standard, widely known communication method (like a conference call) during a security incident. There have been cases of attackers with inside information joining and listening to the incident call.
  • Not knowing the true impact to the business: What are the organization’s key assets? If everything is important, nothing is.
  • Committing all available resources to fight one fire - there could be more than one ongoing incident, and the other one could be more important.

Ben discussed supply chain risks and partner-to-target attacks. When a third party receives too much access, that access can become an attack vector. Supply chain compromise is a variant of the trusted-partner attack: for example, a vendor’s software update, signed with a stolen key, delivered to customers. In another case, the website of a restaurant near a company’s HQ was compromised and a malware-infected menu PDF was uploaded. Are restaurants part of a business’s supply chain?

NIST’s “Framework for Improving Critical Infrastructure Cybersecurity” added a section on supply chain risk in version 1.1.

Other advice on improving security posture:

  • Proactively hunt threats instead of reacting. Reacting leads to alert fatigue, false positives, and spending all the time on analysis.
  • Right skillset for security: curious, passionate, loves to unravel mysteries. If you want to see these qualities in action, watch Cliff Stoll’s keynote!
  • Knowledge of the environment (asset management). Carve out time for people to get periodically re-acquainted with changes in the environment. Comprehensive visibility leads to situational awareness; network traffic analysis is a part of that.

See also:

  • Some companies have formalized and published their incident response documentation. This makes for interesting reading and lets you compare your practices with others. Here’s PagerDuty’s.

Schrödinger’s Pentest: Scoping Entanglement—Laurent Desaulniers

I missed the beginning of this talk. Two topics I found interesting were the descriptions of various activity types and some “red flags” / unethical requirements that indicate it’s time to bail.

Examples of problematic behaviors before/during pentests:

  • preparing (telling folks about upcoming tests, shutting off legacy systems…)
  • subverting the idea: “satisfying the auditors” instead of looking to discover real problems.
  • controlling the process too much: requiring detailed information and stopping all pentest activity as soon as something is found.

Different types of pentesting / consulting activities:

  • vulnerability assessment - a quick automated scan. This should be a step that precedes a pentest; it works on a more fundamental, hygiene level, and if there are issues at this level, the extra effort and cost of a pentest might not be worth it.
  • pentest - humans looking for exploitable issues in your application and systems. Vulnerability chaining is perhaps the most interesting part of pentesting: combining multiple low-level vulns to create a high-impact exploit is something only humans can do. Pentest activities are not meant to be stealthy - they are done quickly and noisily.
  • red team - simulates an adversary to train your response team (blue team). A long-term engagement. The result of a red team activity is an evaluation of the blue team’s performance; red team work is not meant to identify vulnerabilities.
  • purple team - collaborative training of your blue team. Very expensive. Improves knowledge and detection/protection mechanisms. Choose the correct threat model for this to be effective: are you expecting script kiddies? Opportunistic attackers? State actors?

Laurent had two tips for the audience. First, look at legacy/unused systems (“your main webserver is not that interesting to us. You know that server, you give it love.”). Second, when an engagement is complete, ask for a presentation - the results should also go to the C-suite; don’t leave them at the technical level. That’s how one gets budget for future security activities.

Closing Keynote for Day 1—Cory Doctorow

“…it’s important to remember that once a freedom is removed from your Internet menu, it will never come back. The political system only deletes online options–it does not add them. The amount of Internet freedom we have right now is the most we’re ever going to get.”—Douglas Coupland (2016), Bit Rot, p. 117.

Cory launched into a well-rehearsed chronicle of freedom-deleting technology policy events of the past year and the resulting “Rapid China-fication of the Internet”:

  • an Australian ban on working cryptography (by requiring service providers to decrypt or intentionally weaken encryption on demand).
  • European filter requirements (Article 13), which in 2019 removed the copyright safe harbor and introduced liability for service providers.
  • Australian, EU, and UK requirements to remove offensive, violent, or terrorist content within one hour (already law in Australia; under consideration in the UK and EU).
  • FOSTA/SESTA made providers explicitly liable for some categories of speech and resulted in immediate shutdown of all related services in the US, because the risk became unmanageable. See also age verification systems in the UK. Offshore systems that fail to comply will be blocked at the border.

In a year, the Internet went from speech that was presumed innocent (Safe Harbor) to speech that must be filtered through black boxes before being allowed. In large part this happened because we dislike the large social platforms; however, these laws will only strengthen the largest of them.

Anna went over Canadian laws, court cases, and examples relevant to cybersecurity professionals. The story of a 19-year-old who accessed information left publicly accessible on a government website and was subsequently arrested was mentioned by a number of presenters.

In Canada, section 342.1 of the Criminal Code covers the majority of cybersecurity activities; section 430 covers “mischief” (destruction or alteration of data, DoS). For 342.1, intent (“colour of right”) is important: you can be fired and sued for unauthorized investigation and reporting of vulnerabilities.

CISSP and other certifications are not laws; they are brands, trademarks. However, following industry standards and training makes it easier to demonstrate that the intent was not malicious.

It’s very expensive for companies to move beyond the “scary letter” stage. Even so, responding to these letters is a much better tactic than ignoring them.

What Are We Doing Here? Rethinking Security—Jeff Man

Security is thought of as a noun - the state of being secure. Jeff proposes that it is a verb: awareness of the normal state of your environment, monitoring, activities done on an ongoing basis. His mind map showed a wide range of activities associated with security. Most of the time, the focus is on data security (confidentiality, integrity, and availability).

risk = vulnerabilities/weaknesses (bugs, processes, people) + threats (enemies) - countermeasures (mitigation: detection, response, recovery)

In the equation above, “security” is not present. The industry is mostly focused on the vulnerabilities aspect alone, said Jeff; but vulnerabilities are not going away (some of these systems have been around for 20 years, and new issues are still being found in them). In the commercial world, risk is multiplied by the value of the data. In military and government, the value is human life, and money is generally not a problem.
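
To make the equation concrete, here is a toy illustration (all numbers are invented; a real risk model would be far more nuanced), including the commercial-world multiplier by data value:

```python
# Toy illustration of the risk equation from the talk; all scores invented.
def risk_score(vulnerabilities, threats, countermeasures, data_value=1.0):
    # risk = (vulnerabilities + threats - countermeasures) * value of the data
    return max(0.0, vulnerabilities + threats - countermeasures) * data_value

# A 20-year-old legacy system: many weaknesses, active threats,
# few mitigations, high-value data.
print(risk_score(vulnerabilities=8, threats=6, countermeasures=3, data_value=2.0))  # 22.0
```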

Automation Without Exposure - Securing Your DevOps Pipeline—Jeff Hann

Jeff’s talk was about the various tools added to development and deployment pipelines, and the different points in a pipeline where tools can be added:

  • Build stage: linting, static analysis (SAST) - checks for common problems and the OWASP Top 10.
  • Binary: open-source dependency scans, executable analysis (BAST).
  • Deploy/runtime: dynamic analysis (DAST), monitoring.
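
As a sketch of how these stages could be wired together (tool choices are mine, not from the talk: bandit as a Python SAST tool, pip-audit for open-source dependency scanning), a minimal pipeline driver might look like:

```python
# Sketch of a security-tooling pipeline driver. The stage commands are
# illustrative choices, not from the talk: bandit (SAST for Python code),
# pip-audit (known-vulnerable open source dependencies).
import subprocess
import sys

STAGES = [
    ("lint / SAST (build stage)", ["bandit", "-r", "src/"]),
    ("dependency scan (binary stage)", ["pip-audit", "--requirement", "requirements.txt"]),
    # DAST would run post-deploy against a live environment, e.g. OWASP ZAP.
]

failed = False
for name, cmd in STAGES:
    print(f"--- {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        print(f"findings in: {name}")
        failed = True  # report, but keep running the remaining stages

sys.exit(1 if failed else 0)
```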

These tools should not be seen as checklists or roadblocks. Each finding is a training opportunity: once it is resolved, our skills improve. Positive feelings towards these tools are conditional on the tool actually delivering true positives and actionable results. When certain classes of bugs crop up again and again, training can focus on those topics.

CVSS is one way to prioritize multiple findings; note that there are several versions of the scoring system.
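
For example, CVSS v3.x maps base scores into qualitative severity bands, which is one simple way to order findings (the findings below are hypothetical):

```python
# Sketch: prioritize findings by CVSS v3.x base score. The severity bands
# below are the standard v3 qualitative ratings; the findings are made up.
def severity(score):
    if score >= 9.0: return "Critical"
    if score >= 7.0: return "High"
    if score >= 4.0: return "Medium"
    if score > 0.0:  return "Low"
    return "None"

findings = [("outdated TLS config", 5.3), ("SQL injection", 9.8), ("verbose errors", 3.1)]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{severity(score):8} {score:>4}  {name}")
```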

Lessons learned from over a decade of bug hunting and disclosure—Eldar Marcussen

Eldar shared some personal experience finding and reporting bugs, and sometimes tracking the same bug for multiple years.

When disclosing vulnerabilities, reporting is usually the lengthiest step; finding and verifying an issue takes comparatively little time. Eldar does not like the term “responsible” disclosure and prefers “coordinated” disclosure, which does not assume an ethical or moral position. Vendors often use “responsible” to mean “private”. No matter how disclosure is done, the only thing that keeps users safe is fixing the issue.

His personal approach: if a vendor doesn’t have a disclosure policy, ask them how they prefer to receive vulnerability details, and provide the planned disclosure date in the initial email.

Sometimes the same vuln can remain open for years: one example was a 2014 issue that was fixed three years later, but with the vulnerable version still available for download; another issue is still present after four years.

What Did You Do So Wrong (you think you need a firewall in the cloud)?—Kellman Meghu

Kellman spoke about trying to implement firewall solutions on highly dynamic architectures on public clouds, and gave some examples of architectures where access and control are aggressively revoked so that the system needs fewer firewall-style restrictions. However, I understood some of those examples as cautionary tales as well: in one case, a system was effectively “runaway”, with no visibility into it and no way to exercise control. That system had to be shut down and reset.

There are no physical firewall appliances in the cloud; firewalls are usually VMs or sidecars. Firewall vendors are working on improving their startup times, but better asynchronous design patterns mean that services start up much faster as well. Firewall vendors keep losing this race: in one example with a firewall sidecar, containers were failing and restarting because the firewall was not ready. The software had to be modified to deal with network unavailability during startup.
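
The fix amounts to the classic retry-with-backoff pattern at startup: don’t assume the network (or the firewall sidecar) is ready, and keep retrying for a bounded time. A generic sketch, with placeholder host and port:

```python
# Sketch: tolerate network unavailability during startup instead of crashing
# when a firewall sidecar isn't ready yet. Host and port are placeholders.
import socket
import time

def wait_for_dependency(host, port, attempts=6, base_delay=0.5):
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    return False

if not wait_for_dependency("payments.internal", 443):
    # Let the orchestrator restart us rather than running half-connected.
    raise SystemExit("dependency never became reachable")
```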

Firewalls are often used for privileged or human access. If that access doesn’t exist, a lot of problems go away (SSH, VPN, X forwarding…). Sometimes a non-traditional control channel (perhaps Twitter?) can be used instead. In a highly dynamic environment, human access is less useful: failing containers get restarted automatically. This can also make compromising them more difficult (a service issue results in the container being killed and a fresh one started instead). Authentication between services can be done using an internal CA.
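
As a sketch of that last point (standard library only; the file paths are placeholders for certificates issued by your internal CA), mutual TLS between services could be configured like this:

```python
# Sketch: service-to-service authentication via an internal CA (mutual TLS).
# File paths are placeholders for certs issued by your internal CA.
import ssl

def make_server_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("service.crt", "service.key")  # this service's identity
    ctx.load_verify_locations("internal-ca.pem")       # trust only the internal CA
    ctx.verify_mode = ssl.CERT_REQUIRED                # require a client cert
    return ctx

def make_client_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("internal-ca.pem")       # verify peers against the CA
    ctx.load_cert_chain("service.crt", "service.key")  # present our own identity
    # Use as: ctx.wrap_socket(sock, server_hostname="billing.internal")
    return ctx
```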

There was a mention of trust as a dynamic, multidimensional score instead of an absolute. This sounded a lot like Zero Trust and BeyondCorp, but the idea was not developed further.

Auditors do not like the absence of firewalls, and there’s much convincing to be done. But the industry will change.

Closing Keynote for Day 2—Tanya Janca

Tanya’s closing keynote was all about DevSecOps: integrating the DevOps workstyle into security activities, and integrating security into DevOps workflows. The big themes were trusting the developers, collaborating with them, and not blocking their processes with mandatory, slow checks.

The goals of DevOps are closely aligned with security goals. More frequent releases mean faster security fixes. Easy rollbacks and fewer bugs making it through mean better availability. Faster time to market means the business is viable and there’s money to invest in security.

Secure patterns should be the default:

  • “paved road principle” - the easiest way is also the secure way.
  • “top answers from StackOverflow are insecure, we need to give better samples” (Max: I would suggest engaging on SO to fix those answers, not only trying to give good samples within your organization.)
  • develop common patterns/libraries for handling secrets, crypto, and outdated/deprecated algorithms.
  • negative testing: make sure errors are handled deterministically and well.
  • turn pentest results into unit tests to make sure found issues do not surface again.
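
For instance, a pentest finding such as path traversal in a file-download endpoint can be pinned down as a regression test. A hedged sketch using unittest (handle_download and its module are hypothetical):

```python
# Sketch: turning a (hypothetical) pentest finding into a regression test.
# Assume handle_download(path) was found to serve files outside the web root
# and was fixed to reject such paths with a 400 response.
import unittest
from myapp.files import handle_download  # hypothetical module under test

class PathTraversalRegression(unittest.TestCase):
    def test_traversal_payloads_are_rejected(self):
        for payload in ["../../etc/passwd", "..%2f..%2fetc/passwd", "/etc/passwd"]:
            with self.subTest(payload=payload):
                response = handle_download(payload)
                self.assertEqual(response.status_code, 400)

if __name__ == "__main__":
    unittest.main()
```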

More collaboration between developers and security:

  • make sure dev and ops are not waiting for security. Kick off separate security pipelines with non-blocking, long-running activities (“in-depth testing” in Tanya’s words).
  • invite dev/ops to security specific activities.
  • set aside time for devs to ask security questions (“office hours”).
  • if developers want to go around security rules, they will. Developers are trusted, so give them the ability to do their jobs and utilize that trust. As part of that, let them use security tools - and buy them the licenses.