Friday, January 4, 2013

Notes from 29th Chaos Communication Congress – day 2

See also day 1, day 3 and day 4.


Certificate Authority Collapse


DigiNotar notes:



  • A single MS Exchange controlling all servers.
  • Administrator password Pr0d@dmin.
  • 30 software updates ignored, including some years-old ones.
  • Mitigations:

    • Dutch government took over DigiNotar, with no known legal basis(!): "on a private-law basis = DigiNotar submitted voluntarily"
    • Trust revocation was delayed in the Dutch market for weeks by Microsoft.
    • The mitigation is considered a success story by the Dutch government.

  • The government allowed DigiNotar certificates in e-commerce (tax submissions) 11 months after the cert breach!
  • DigiNotar is still running as an intermediate CA. [What does it mean after blacklisting?]


"Systemic vulnerabilities":


  • Any CA can vouch for any domain name.
  • CAs trusted "by default" - you go through a paper trail, not audit trail.
  • Intermediate CAs "sublet" root status.
  • It's difficult to attribute an attack to a specific attacker.
  • Information asymmetry - victim can hide the risk to its users.
  • CA revocation: connectivity vs. security trade-off, and "the end user only wants connectivity".
  • Poor HTTPS implementation (see SSL pulse)


EU eSignature regulation:
  • "Regulation" => once adopted at EU, directly binding in all member states
  • Covers "trust service providers established in the EU", incl. CAs; other stakeholders (HTTPS servers) unregulated (the only known argument for this restriction is that EU organizations are also insecure).
  • CAs to be liable for "any direct damage" => better incentives, but the possible liability is very large => large insurance expense => barrier to entry.


"EU should":


  • Apprise all underlying values, incl. privacy
  • Evaluate incentives of all stakeholders, but no specific problem mentioned

(The speaker suggested that the liability should be with websites instead; overall it looked a little like a lobbying effort to ease the CA burden.)

Some interesting lightning talks



  • SMrender = a programming language for OpenStreetMap data; www.abenteuerland.at/smrender/
  • GNUnet: secure p2p networking, "censorship resistant DHT". Anonymous NAT traversal (established by an ICMP reply forwarded to the inside)
  • OpenITP peer review board: "qualified" projects will be audited by commercial firms, sponsored by OpenITP.


The Tor software ecosystem


A fairly long list of various Tor software projects, asking for help (https://www.torproject.org/volunteer). Only a few highlights here:

  • "Thandy": a package installer/updater that protects against attacks on the update system, e.g. pretending that there isn't an update. Might be interesting for Fedora as well?
  • TLSDate = a time client using TLS for transport. Not a good NTP replacement, but cryptographically protects current time (=> can detect obsolete "Tor consensus" etc.)
  • Tor cloud bridge images (for Amazon and others). Can be used to contribute to Tor, or for personal use (to spin up a personal bridge on Amazon when Amazon is reachable through the firewall)
  • Some statistics from compass.torproject.org: 31% of exit nodes in Germany, 21% in the US; a single German AS owned by CCC accounts for 19%. 500k users/day.
  • Flashproxy: any client visiting a web site is turned into a Tor bridge (!) Implemented as AJAX, which can do only outgoing connections => it connects to Tor, and to the "censored user", but we still need to somehow pierce NAT.
  • Tor is 80% funded by the US government. When asked why it should be
    trusted, the answer was "It's the only system where you can read the source
    code and see whether it can be trusted."


FactHacks


Various loosely connected topics regarding RSA factorization. See also http://facthacks.cr.yp.to/.

  • "Factoring probably isn't NP-hard."
  • The sage math package can factor a 256-bit modulus in about 500 seconds.
  • Instead of factoring, can we just guess? There are >2^502 primes between 2^511 and 2^512 => unusable in the ideal case, but if the RNG is weak...
  • A fairly difficult attack: buy various devices, generate billions of keys,
    check whether any of the primes divide the attacked N. This actually works:

    • In 1995, the Netscape browser generated only ~2^47 possible keys in a specific second.
    • Debian key generator bug

  • Easy attack: take two keys, look for a common factor. Use batch GCD for testing all pairs (the "Minding Your P's and Q's" paper by Heninger et al. describes this in more detail) => https://factorable.net/: found tens of thousands of keys. There was another similar paper in the same time frame.
  • More examples of bad randomness: Chou 2012: factored 103 Taiwan Citizen Digital Certificates (out of 2.26 million); the corresponding private keys are stored on smart cards.
  • Overview of algorithms for factorization:

    • Trial division takes time about p/log(p) to find p.
    • Pollard's rho method: typically about sqrt(p) steps.
    • Pollard's p-1 method: also about sqrt(p) steps, but only for "smooth" primes (p-1 has lots of small factors). We can avoid such p values, but there is a generalization...
    • = "Ellliptic curve method": works if the number of points on a curve modulo p is smooth; there are many such curves => we can't exclude "weak" p values => we want to choose p and q of same size to avoid small factors (but shouldn't take p and q too close to each other, then it's possible to brute-force the area around sqrt(N))
    • Fermat factorization: always works, but O(p) steps if the primes are not close

    • A ~1024-bit RSA key can be factored in ~2^80 operations (using the number field sieve) [whatever "operation" means]. This is already feasible for botnets and large organizations! It is estimated that scanning ~2^70 differences will factor any 1024-bit key; the Conficker botnet could do that in less than a year (but it would need to remain undetected over that year => it can't use full CPU power). On the other hand, a private computer cluster (e.g. NSA, China) might do it. (The NSA plans a computer center with 65 MW => ~2^19 typical CPUs, i.e. ~2^84 floating-point multiplications/year.)
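Pollard's rho method from the overview above fits in a few lines. This is a toy sketch with tiny made-up primes, for illustration only; real attacks on 1024-bit keys need the far more involved number field sieve:

```python
from math import gcd

def pollard_rho(n, c=1):
    """Return a nontrivial factor of composite n; expected ~sqrt(p) steps
    for the smallest prime factor p (Floyd cycle detection)."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step of x -> x^2 + c
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    if d == n:                         # unlucky cycle: retry with another c
        return pollard_rho(n, c + 1)
    return d

# Toy RSA-style modulus built from two small primes.
n = 1000003 * 1000033
p = pollard_rho(n)
print(p, n // p)                       # recovers the two prime factors
```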


  • Factoring via Google: "BEGIN RSA PRIVATE KEY" site:pastebin.com (15 000 results (!)).

    • As a special case, there is a pasted key with the "middle" missing; an RSA private key may contain many precomputed values => there is redundancy, and even if you have only part of a single value of the private key, you can recover the rest.
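The redundancy mentioned above comes from the private-key format storing n, e, d, p, q plus the precomputed CRT values, so any one secret component yields the whole key. A minimal sketch with toy primes, assuming the leaked fragment revealed only n, e and the prime p:

```python
# Toy RSA parameters (real keys use ~1024-bit primes).
p, q = 1000003, 1000033
n, e = p * q, 65537

# Reconstruct everything else from n, e, p alone:
q_rec = n // p                           # the other prime
phi = (p - 1) * (q_rec - 1)
d = pow(e, -1, phi)                      # private exponent (Python >= 3.8)
dP, dQ = d % (p - 1), d % (q_rec - 1)    # precomputed CRT exponents
qInv = pow(q_rec, -1, p)                 # CRT coefficient

# Round-trip check: the reconstructed key decrypts correctly.
m = 1234567
assert pow(pow(m, e, n), d, n) == m
```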

  • Lessons:

    • Stop using 1024-bit RSA.
    • Make sure your primes are big enough.
    • Make sure your primes are random.
    • pastebin is not a secure cloud store
    • ... and you shouldn't put keys even in a secure cloud store anyway.
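The pairwise common-factor attack described above is easy to demonstrate. This toy sketch uses small moduli with a deliberately reused prime to stand in for keys from a bad RNG; real scans like factorable.net use batch GCD to handle millions of keys:

```python
from math import gcd

# Toy moduli: a "bad RNG" reused the prime 1000003 in two different keys.
shared = 1000003
moduli = [
    shared * 1000033,     # key 0 (affected)
    shared * 1000037,     # key 1 (affected)
    1000039 * 1000081,    # key 2 (independent)
]

# Naive O(n^2) pairwise scan; any common factor > 1 breaks both keys at once.
broken = {}
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            broken[i] = (g, moduli[i] // g)   # full factorization of key i
            broken[j] = (g, moduli[j] // g)

print(broken)   # keys 0 and 1 are fully factored; key 2 stays safe
```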



Q/A session:


Would you recommend ECC or multiple-prime RSA?

ECC will give you much higher security level if you have performance
limitations. OTOH DSA (including ECDSA) is much worse than RSA in terms of randomness failures.

What is a recommended alternative to 1024-bit RSA?

For 5-10 years "very comfortable recommending" 256-bit ECC.

Can you quantify the proportion of "weak" primes?

For 1024-bit keys they are rare enough that it is not necessary to worry about it.

How do I know that my RNGs are fine for generating keys?

"Seed them": Linux has patched some problems we found. Generating keys on a general-purpose computer that has been running for a long time is probably fine; wouldn't generate keys on a low-power computer.

Can I estimate the quality of a generator from a sample of keys?

Use factorable.net, check against the internet-scraped key list; or do the same on a large sample of keys generated by the RNG under question.

How much effort would it be to upgrade to RSA 4096 or 8192 key?



  • You'll notice degradation of performance (2048 is not a big deal)

  • The US government recommends to stop using 1024-bit keys as of 2010; they are still used everywhere, so in practice expect difficulty.

  • Financial standards impose a 1984-bit maximum while at the same time requiring 4096 :)

  • 2048-bit RSA may make a busy server unable to handle the load => ECC preferred



How do I unit-test a key generator?

NIST standards for RNG testing isolate the random and deterministic parts of the code => can unit-test the deterministic part. Unfortunately, ECDSA needs the RNG for each operation, so the two parts are mixed. (Story: a manufacturer had a testing step in production that switched on all devices in a group at the same time => all inputs were the same => the long-term keys generated on first boot were all identical.)
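The production-line story is easy to reproduce in miniature: if every device seeds its generator from the same "boot input", all long-term keys come out identical. A toy sketch (the derivation function is hypothetical, not any vendor's code):

```python
import hashlib

def derive_longterm_key(seed: bytes) -> bytes:
    """Toy deterministic key derivation: all entropy comes from the seed."""
    return hashlib.sha256(seed).digest()

# The factory test step powered on the whole batch at the same instant,
# so every device saw the same "boot time" as its only entropy source.
boot_input = b"batch-42 2012-12-28T10:00:00"
keys = [derive_longterm_key(boot_input) for _ in range(1000)]

assert len(set(keys)) == 1   # 1000 "devices", one single key
```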



Defeating Windows memory forensics


Memory forensics is increasingly popular:


  • It's easier to detect malware in memory than hidden on disk, there is also memory-only malware.
  • You can see resource use, connections (incl. recently terminated)
  • You can find passwords.

Memory acquisition methods:


  • Hibernation file
  • Memory dump on blue screen
  • External tools: use a kernel-mode driver to access the physical memory (often just a proxy to \\Device\PhysicalMemory, or uses low-level APIs). Such a crash dump may or may not include device memory or register contents.

Dump Analysis: "Personal choice" of what tool to use: "Volatility" framework.

Current antiforensic techniques:


  • Block acquisition: prevent loading the (well-known) driver (e.g. a Metasploit script, no longer available). Possible evasion method: just rename the process or driver.
  • "1-byte modification": Every tool has "abort factors" that break the analysis implementation => just make small modifications to the OS in memory. This doesn't let you hide anything you choose, and breaking analysis will be noticeable.
  • "Shadow walker" = custom page fault handler: data access redirected elsewhere, code execution allowed (desynchronized code/data TLB). This is "really unstable", unusable on multi-processor, code of the page fault handler can't be hidden. Impacts performance.

The weakest link in dump acquisition process: storing the dump => a rootkit can fake the dump as it is being stored. Something similar was already done in 2006 for disk forensics.

=> Newly introduced tool in this talk: Dementia.


  • Kernel implementation: can hook NtWriteFile() (not supported by MS, prevented on 64-bit), or use a filesystem minifilter driver.
  • Detecting that a memory dump is being written by "patterns" (NtWriteFile arguments / process name / driver / FILE_OBJECT values/flags).
  • Then either scan the data being written for what we want to hide (which is slow), or build a list of all "somehow related" pointers, then find and replace them (e.g. remove the structure from a linked list); this relies on internal data structure layout, but "windbg can do it" (using MS PDB symbols, DbgHelp API). Still, "traces of activity are everywhere". [In Volatility: just deleting the "proc" allocation will fool most of the plugins - dereferencing a pointer/handle that doesn't point to a known process aborts the analysis step]
  • (The talk contained a detailed list of activity done by Dementia to hide information...)


Actually... almost all memory dump tools copy the data from physical memory to user mode, and from there to the file => a user-mode-only attack is sufficient to hook the dump file writing (it would still need admin privileges because the dump tool runs as admin) - but it's actually more difficult to do in user mode only: no knowledge of kernel addresses of data structures or of the physical/virtual mapping (because the only known information is what has already been written; we can't do random access inside the image being written to search for information).

Limitations of Dementia:


  • Quite a few object types not hidden
  • Not even hiding itself fully
  • x64 port not yet done

Conclusions:


  • Dump acquisition tools should write image from kernel-mode: it's more secure and faster
  • Hardware acquisition methods (e.g. Firewire) preferred
  • USE native crash dumps instead of raw dumps
  • Perhaps search for rootkits first? It's not clear whether it would be effective.



Let me answer that for you


"GSM paging": paging channel (downlink only broadcast) is used to start a "service" (call/sms). Each message contains mobile identity (IMSI/TMSI) (t = "temporary" mobile subscriber identity, for privacy]. On paging receipt:

  • The phone sends a channel request.
  • The phone gets a channel assignment.
  • The phone responds to the paging message.
  • Then communication happens on the assigned channel, authenticated/encrypted. This is supposedly not possible to easily hijack, the authentication/encryption uses a SIM-based key.

We can send a bad paging response => duplicate responses on the air, typically a DoS.


  • Not limited to a single BTS, paging is sent to a wider "location area".
  • How to respond faster than the victim? Weather etc. are a factor, but the baseband latency is most important. => modified osmocombb, integrated the relevant L2/L3 code inside L1.

To identify victim address:


  • Just block everyone.
  • Use 3rd party services to lookup the number->IMSI mapping in HLR.
  • If TMSI is used: create a call, drop it soon enough (=> no ring), or send a silent SMS; then sniff for the paging request, and capture the TMSI.

Hijacking content delivery:


  • Handling encryption: India reportedly still uses no encryption. Some networks use A5/2, which is too weak. A5/1 is broken now as well, almost in real time. A5/3 is not deployed yet (some phone manufacturers haven't implemented it correctly, so deploying it would break networks).
  • Handling authentication: 50% of networks authenticate only 10% of SMSs/calls [because they earn money only on outgoing calls?] => 90% of services can be hijacked (the victim never receives them)

In O2 Germany: the hijacker sets up a channel, but doesn't respond to auth request; victim then responds to the paging request and authenticates ordinarily => hijacker is now authenticated as well!

Attacking larger areas:


  • It's enough to be within the same "location area" of the HLR, not limited to a single BTS. Based on GPS logs, ~5 Vodafone location areas cover all of Berlin (100-500 km² each); non-city areas are even larger, up to 1000 km² seen. => paging DoS is way more effective than jamming such a large area.
  • To make the attack more efficient, instead of reacting to every paging request, can send a "IMSI DETACH" (ordinarily sent by the phone when shutting down, request is not authenticated at all!).
  • On a real network, Vodafone sends about 1200 paging requests per second. The current attack takes about 1 second per request ... but the necessary Motorola phones are cheap [active attackers are just not taken into account by GSM standards].

Once you can hijack an SMS, you can do MITM - there are gateways that allow specifying a sender; or just don't ACK the transmission and it will be resent to the victim.

3G: in theory using the same system, but it is on a separate network => we can't use the current GSM attacking hardware.

Countermeasures:


  • Authenticate 100% of the time to protect against hijacking
  • Authenticate paging requests to protect against DoS (but changing this is infeasible).




ESXi Beast


Introduced "Canape", a free-as-beer protocol testing tool, "GUI IDE", in .NET. For traffic capturing, supportws SOCKS, port forwarding MITM, application level HTTP/SSL proxy.


ESXi protocol: network management of virtualization products. Actually several protocols over one connection: remote desktop, file transfer, database, ...; transitioning from/to SSL over the socket over time.

The talk was mostly a demo of the tool.
