Friday, January 4, 2013

Notes from 29th Chaos Communication Congress – day 4

See also day 1, day 2 and day 3.

The care and feeding of weird machines found in executable metadata

A "weird machine" = an unexpected source of computation e.g. return-oriented programming, heap crafting. This talk presents a way to perform computations through ELF relocation entries.
Why bother?
  • (Checksum of the complete executable file detects this.)
  • Philosophically, "composition kills".
  • Antivirus products seem to focus on code.
  • Some people apparently don't sign metadata...
  • It's not clear how to distinguish "good" from "evil" well-formed data [the same problem antivirus software has with code; nothing fundamentally different]
Interesting ELF relocations:
  • R_X86_64_COPY = memcpy
  • X86_64_64 = *(base + reloc.offset) = symbol + base + reloc.addend
  • X86_64_64_RELATIVE = *(base + reloc.offset) = base + reloc.addend
  • STT_IFUNC symbol type: instead of a symbol value, contains a function pointer that is called, and the return value is used as the symbol value
=> we can use relocations as instructions, and symbol table entries as variables. For jumps: modify our dynamic entry to point at the desired relocation [= "instruction"], then modify the dynamic linker's internal data so that the relocation table is processed "again".
Conditional branch: STT_IFUNC is only interpreted if the "section header index" != 0 => we can conditionally write based on that, and use it to conditionally terminate processing of the relocation table (=> continue running a "different table" = the jump target).
Finished this exercise in spirit, with a brainfuck => ELF compiler...
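The "relocations as instructions, symbol table entries as variables" idea can be sketched with a toy model (my own simplification in Python, not a real ELF loader; the addresses and the single relocation type modeled here are illustrative):

```python
# Toy model of "relocations as instructions". Only one relocation type is
# modeled, with the semantics quoted above:
#     *(base + offset) = symbol_value + addend
# The twist: the symbol table itself lives in memory, so a relocation can
# target a symbol-table slot, turning later relocations into reads of a
# value that an earlier relocation computed.

def run_reloc_table(mem, symtab, relocs, base=0):
    for offset, sym, addend in relocs:      # each entry = one "instruction"
        mem[base + offset] = mem[symtab[sym]] + addend

# Symbol table: name -> address of the memory slot holding the symbol value.
symtab = {"a": 0x100, "b": 0x108, "out": 0x110}
mem = {0x100: 41, 0x108: 0, 0x110: 0}

relocs = [
    (0x108, "a", 1),    # "b = a + 1"  (writes into b's symbol-table slot)
    (0x110, "b", 0),    # "out = b"    (reads the value just computed)
]
run_reloc_table(mem, symtab, relocs)
print(mem[0x110])       # 42
```

The second relocation only produces 42 because the first one wrote through the symbol table, which is the data-flow trick the talk builds on.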

OS X Mach-O: similar in concept. Relocation is called "binding"; relocation entries are "compressed": bytecode operations set various parameters, then a "do bind" opcode. Probably can do similar things, or at the very least can "disable" functions (e.g. seteuid() to drop privileges) by changing the symbol name being looked up.
Windows PE: similar work already exists ("locreate": uses relocation entries as an unpacker, pe101.corkami.com contains a PE+PDF+ZIP combo file)

Suggested mitigations: [but there's no real reason to bother - just sign the whole file and be done with it]


  • Page-level permissions ("elfpack"?)
  • Look at limiting the expressiveness of the format
  • More loader sanity checking (e.g. prevent the relocation entries from overwriting linker's private variables or stack) might help

Page Fault Liberation Army or Gained in Translation

First summarized various creative uses of the page fault mechanism:
OpenWall: decrease code segment limit to exclude stack (need to recognize and handle gcc trampolines)
SEGMEXEC: use a 1.5G segment for data, and a 1.5G segment for code starting at 1.5G => separate page tables for data and code X restricted data space, using non-zero-base segments is very slow on modern CPUs
PaX: pages with "user"=0 => always cause a trap; then check whether it is because of EIP; if not, temporarily set user=1, read the page to fill TLB (data TLB only), set user=0 again.
OllyBone: Want to trap first execution after write (=> after malware unpacking), using same technique as PaX.
ShadowWalker rootkit: similarly, use different page frame number for code and data => rootkit detection won't see the code.
In general, "those TLB bits are memory, you can program with them".
"Labels & flow": PaX looks for implicit flows from userland to kernel. Spreads labels over the various bits in the page tables. "h2hc 2012 presentation absolutely recommended".
Contribution of this talk: a full programming environment [actually a Turing
machine with no I/O] in which "no instruction ever finishes executing":

  • Hardware task switching (task gate) used for memory storage. A task gate can be used as an interrupt handler (IDT->GDT->TSS); it reloads almost all CPU state from memory (incl. page tables); supposedly atomically, but actually dword at a time.
  • SP decrement is used for arithmetic (SP -= 4)
  • Double fault (when SP is decremented from 0) is used as a branch mechanism.

=> our single instruction: { x = y; if x < 4: goto b else {x -= 4; goto a} }
  • This single instruction is enough for a Turing-complete machine.
  • Needs one TSS descriptor per instruction.
  • Uses a different IDT per instruction.
  • "X" is stored as address of current task state.
  • "Y" are the addresses of TSS.
  • TSS "busy" bit is problematic; we can overlay it over the GDT so that it is always cleared. This mechanism limits us to 16 instruction virtual addresses (but we can have more physical addresses => "16-color instruction CFG").
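Taking the single instruction literally as written above, it can be simulated with a short interpreter (my own toy reading of the notes, not the authors' implementation; the assumption is that each "instruction" carries its own y source cell and a/b targets, with the per-instruction x cells standing in for TSS addresses):

```python
# Sketch interpreter for: x = y; if x < 4: goto b  else: x -= 4; goto a
# program: instruction index -> (y, a, b), where y names a cell
# (possibly another instruction's x cell, which is how state flows).

def run(cells, program, start=0, halt=-1, max_steps=1000):
    i, steps = start, 0
    while i != halt and steps < max_steps:
        y, a, b = program[i]
        cells[i] = cells[y]         # x = y
        if cells[i] < 4:
            i = b                   # branch taken
        else:
            cells[i] -= 4           # "arithmetic" via the SP decrement
            i = a
        steps += 1
    return cells

# One self-referencing instruction computes x mod 4 by repeated subtraction:
cells = {0: 10}
run(cells, {0: (0, 0, -1)})   # y = own cell, a = loop to self, b = halt
print(cells[0])               # 2
```

Even this tiny loop shows the subtract-and-branch flavor from which counter machines (and hence Turing-complete computation) can be composed.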
No publicly available emulator implements this correctly!
  • bochs can run the Turing machine, but not all of the tricks.
  • Intel's Simics reboots the VM (triple-fault?)
  • KVM, Simics can be made to crash with a few changes (incl. the host in case of KVM?)
Limitations:
  • 32-bit only? "Working on 64-bit", where these legacy mechanisms were cleaned up and removed.
  • Not tested on AMD

Notes from 29th Chaos Communication Congress – day 3

See also day 1, day 2 and day 4.

CVE-2011-3402 technical analysis


= embedded font with a kernel exploit.



  • Earliest use in 2010, discovered in Duqu in 2011; now a fully working exploit is used in the Cool and BlackHole exploit kits.
  • Font rendering: win32k.sys executes TrueType font programs in ring 0 (motivation per NT 4.0 documentation: "faster operation and reduced memory requirements").
  • "CVT" = array of point values (a "variable storage area")
  • The TrueType VM includes a function for merging bitmaps while offsetting them (to do kerning), which misses a bounds check => used to set a single bit in the length of the CVT, making it possible to overwrite the global VM state which follows the CVT in memory. This is memory-layout independent: the TTF VM code contains a loop that flips a bit in the global state and searches for a CVT offset through which it is visible => the exploit uses the TTF VM to help with its own setup!
  • Then the VM code overwrites a function pointer in the global data (which is supposed to point to one of 6 predefined rounding functions).
  • The TrueType implementation (probably?) hasn't changed over the years => structure layouts haven't changed either.
  • All memory accesses are relative, the VM loop detects a precise offset => ASLR doesn't help.
  • Font metadata in exploit: "copyright 2003 showtime inc. dexter regular" (reference to a TV show?).


A note from the lightning talks


"Help a reporter out": you can give an interview on anything if a reporter needs last-minute experts = free advertising.

Analytical summary of the Blackhole exploit kit



  • ~12 PHP scripts that tie together various exploits, together with reporting/management UI
  • PHP => platform independent (also requires MySQL, IonCube)
  • It seems that many of the exploits were written by someone else, and (based on a public argument) rented (and unpaid) rather than purchased (eventually dropped in 2.0, replaced by other exploits).
  • "Cool exploit kit" very similar - ripped off blackhole, or a new brand by the same author?
  • There is a tool for brute-forcing Blackhole admin passwords :) => Blackhole added a captcha
  • Author? "Paunch". There is public contact info / live tech support, and a public fee schedule, e.g. $1500/year.
  • Source code was leaked = copied from a running server; some files missing, IonCube-obfuscated.
  • Exploit URL: used to be .../main.php?id=[md5 of time of exploit run] ... nowadays a little more randomized, but most of the content is always the same, with various strings unique to the exploit.


Overview of secure name resolution


Largely an overview of various approaches.

UDP spoofing: need to guess source port, transaction ID (~31 bits) => difficult for "write-only" attackers, but a local attacker that can read the requests can do it easily enough.
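The "~31 bits" figure can be checked with a back-of-envelope calculation (assumption: a 16-bit transaction ID plus roughly 2^15 effective ephemeral source-port values; actual port ranges vary by OS):

```python
# Blind DNS spoofing: a forged answer must match both the transaction ID
# and the randomized source port of the outstanding query.
import math

txid_bits = 16
port_bits = 15                  # assumption: effective ephemeral-port entropy
total = txid_bits + port_bits
p_hit = 2.0 ** -total           # chance one forged packet matches

# Forged packets needed for a 50% chance of at least one hit:
n50 = math.log(0.5) / math.log(1 - p_hit)
print(total, round(n50 / 1e9, 2))   # 31 bits, ~1.5 billion packets
```

This is why the attack is hard for "write-only" attackers, while an on-path attacker who can read the query needs exactly one packet.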

DNSSEC:


  • Signs records, not responses, but NXDOMAIN isn't a record => signed NSEC response "no names between A1 and A2", which allows zone disclosure => NSEC3: "no names between hashes H1 and H2" => the attacker gets hashes of all names and needs an (off-line) dictionary attack to recover them.
  • No existing OS stub resolver does validation (Windows checks the results from a DNS recursive resolver, but that's all).
  • Failures look like general DNS errors, and the user can't override them; providers get blamed => Comcast is maintaining a list of failures to ignore.
  • Depends on accurate time => DoS risk, and NTP pools depend on DNS.
  • DoS amplification, but countermeasures exist.
  • ISP wild-card redirect: still possible for a TLD operator (Verisign), or when the ISP is validating for the user.
  • Root zone trust: Verisign has the zone key, which is signed by an ICANN key; that key lives in 4 HSMs with copies, authenticated with 3/7 smart cards => 3/7 physical keys
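The NSEC3 offline dictionary attack mentioned above is easy to sketch: per RFC 5155 the disclosed value is an iterated, salted SHA-1 over the wire-format owner name, so collected hashes can be tested against a wordlist offline (salt, iteration count, and names below are made up for the example):

```python
# NSEC3 responses disclose *hashes* of existing names; an attacker who
# has collected them can run an offline dictionary attack.
import hashlib

def wire_name(name):
    # DNS wire format: length-prefixed lowercase labels, terminating zero.
    out = b""
    for label in name.lower().rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def nsec3_hash(name, salt, iterations):
    # RFC 5155: H(name || salt), then `iterations` extra H(digest || salt).
    h = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        h = hashlib.sha1(h + salt).digest()
    return h

salt, iters = bytes.fromhex("aabbccdd"), 12          # example parameters
captured = {nsec3_hash("secret-host.example.com", salt, iters)}
wordlist = ["www.example.com", "mail.example.com", "secret-host.example.com"]
found = [n for n in wordlist if nsec3_hash(n, salt, iters) in captured]
print(found)    # ['secret-host.example.com']
```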

DNSCurve: on-line signatures, forwarders impossible. All 300 root servers would have to have the private key (ICANN doesn't want this); higher CPU load requirements.

Namecoin = modified Bitcoin, with every client having a full name database (=> crazy?)

EMV walkthrough


For connecting a smartcard reader, use PC/SC; you don't need a specialized "EMV" reader - it's all ISO 7816.

Nothing new otherwise, just a walkthrough through the publicly available EMV standard.

Hash-flooding DoS reloaded attacks and defenses


Attack first suggested in 1989 by Solar Designer in Phrack. Published in 2003 at USENIX. Another publication in 2011.

Possible countermeasures:


  • Use a "safe" structure, e.g. a balanced tree, for handling collisions.
  • Just discard cache entries that would cause a large collision list (if discarding data is OK).
  • Not: use SHA-3: a) it's slow, b) it doesn't work - SHA-3 is collision resistant, but (SHA-3 mod (small hash table size)) is not collision resistant.

Common response: use an application-specific secret key to randomize the hash function

MurmurHash 2: block processing is independent of the seed value; the state is set to the seed, then updated by incoming data => we can create pairs of input blocks that cancel each other WRT the hash state => for 16·n bytes of input, can create 2^n collisions, irrespective of the secret seed.
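The structural flaw can be illustrated with a deliberately simplified toy hash (NOT MurmurHash itself; the constant is borrowed only decoratively): if the per-block update is invertible and seed-independent, input differences can be cancelled, and the resulting collisions hold for every seed.

```python
M = 0x5bd1e995              # MurmurHash2's multiplier, decorative here
MASK = (1 << 32) - 1

def toy_hash(seed, blocks):
    state = seed
    for b in blocks:        # seed-independent, invertible block update
        state = (state * M + b) & MASK
    return state

# Two different 2-block messages whose difference cancels:
# hash = seed*M^2 + b1*M + b2, so (+1 in block 1, -M in block 2) is invisible.
m1 = [10, 20]
m2 = [11, (20 - M) & MASK]

for seed in (0, 1, 0xdeadbeef, 123456789):
    assert toy_hash(seed, m1) == toy_hash(seed, m2)
print("collide for every seed")
```

Chaining k such block pairs gives 2^k colliding messages, which is the shape of the attack described above.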

With MurmurHash 3 (introduced as a response to the attack), we can do the same thing.

Trying this attack:


  • on Rails: need the string to pass some format checks, can just brute-force for acceptable values. For www-form-urlencoded data, Rails limits the total number of parameters => safe, but JSON data is not protected. Lesson: patching this kind of vulnerability in applications doesn't work: too much code, too many opportunities for loopholes.
  • on Java: the only issue is that you need to construct "char" (16-bit) character strings.

Both cases reported, with CVEs [what about other languages?]: http://emboss.github.com/blog. Reactions: no response from Java; the Ruby problem was fixed in CRuby, JRuby, Rubinius.

Possible fixes:


  • "Don't use MurmurHash"?

    • CityHash: even weaker than MurmurHash - can find more collisions for the same
      length of string.
    • Python's hash(): a little better than MurmurHash, but still not good: uses the hash input as a key to encrypt the seed and takes this as the hash result; so if we can see a hash value, we can just decrypt it to recover the seed - and randomization is optional anyway.
    • Marvin32 (.NET): no results for now, looking at it

  • Introduced SipHash: "fast short-input PRF". https://131002.net/siphash/

    • rigorous security analysis (peer-reviewed research paper).
    • 256-bit state, 128-bit key. Can use an
      arbitrary number of "rounds" for compression, or for finalization =>
      SipHash-X-Y naming. Proposing SipHash-2-4 for general use.
    • Strength claims: ~2^128 key recovery, ~2^192 state recovery, ~2^128 "internal-collision forgery". With ~2^s effort, probability of forgery 2^(s-64).
    • ~1200-200 cycles per 8-64 bytes, 1.44 cycles/byte for long messages
      => < 2x slower than CityHash, SpookyHash.
    • 18 third-party implementations in 8 days already.
    • Now used in Perl 5, CRuby, JRuby, others

  • Why not use other cryptographic hashes? SHA-3 is much slower than SipHash. BLAKE is 2-3x slower than SipHash.
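SipHash-2-4 is small enough to sketch in full; this follows the published specification (initialization constants, the SipRound schedule, and the test vector are from the SipHash paper):

```python
import struct

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & 0xFFFFFFFFFFFFFFFF

def _sipround(v0, v1, v2, v3):
    M = 0xFFFFFFFFFFFFFFFF
    v0 = (v0 + v1) & M; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
    v2 = (v2 + v3) & M; v3 = _rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & M; v3 = _rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & M; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24(key, msg):
    """SipHash-2-4: 2 compression rounds per block, 4 finalization rounds."""
    k0, k1 = struct.unpack("<QQ", key)
    v0 = k0 ^ 0x736f6d6570736575
    v1 = k1 ^ 0x646f72616e646f6d
    v2 = k0 ^ 0x6c7967656e657261
    v3 = k1 ^ 0x7465646279746573
    # Pad with zeros to 7 mod 8 bytes, then append message length mod 256.
    padded = msg + b"\x00" * (7 - len(msg) % 8) + bytes([len(msg) % 256])
    for off in range(0, len(padded), 8):
        m, = struct.unpack_from("<Q", padded, off)
        v3 ^= m
        for _ in range(2):
            v0, v1, v2, v3 = _sipround(v0, v1, v2, v3)
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):
        v0, v1, v2, v3 = _sipround(v0, v1, v2, v3)
    return v0 ^ v1 ^ v2 ^ v3

# Test vector from the SipHash paper: key 00..0f, 15-byte message 00..0e.
assert siphash24(bytes(range(16)), bytes(range(15))) == 0xa129ca6149be45e5
print("ok")
```

The "SipHash-X-Y" naming above corresponds directly to the two round counts in this code.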



The future of protocol reversing and simulation applied on ZeroAccess botnet


Introduces "Netzob":

  • Infers protocol "vocabulary" and "grammar"
  • Can simulate a client/server/fuzzing
  • Can export the analyzed protocol in various formats, including a Wireshark dissector.

ZeroAccess botnet: 2 ways to gain money: click fraud and bitcoin mining

Protocol inference:


  • Split messages to fields:

    • Find fixed-width fixed/variable fields
    • Find delimiter-based fields
    • "Sequence alignment" - find maximal-length common sequences (=> maximal-length fixed fields) for the sample, then convert into a regexp.
    • Supports hierarchical message format, with each level using a different
      method

  • Cluster similar messages. Similarity = "ratio of dynamic fields / bytes", "ratio of common dynamic bytes". Use UPGMA hierarchical clustering.
  • Find values that vary depending on context (IP address, time, ...)
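The "find fixed fields, then convert into a regexp" step can be sketched minimally (my own illustration, far simpler than Netzob's sequence alignment; it assumes equal-length samples, whereas real alignment also handles insertions):

```python
# A byte position is part of a fixed field if all sample messages agree
# there; otherwise it belongs to a dynamic field, emitted as ".{n}".
import re

def infer_template(samples):
    pattern, run = "", 0
    for col in zip(*samples):
        if all(c == col[0] for c in col):     # fixed field byte
            if run:
                pattern += ".{%d}" % run
                run = 0
            pattern += re.escape(col[0])
        else:                                  # dynamic field byte
            run += 1
    if run:
        pattern += ".{%d}" % run
    return pattern

msgs = ["CMD:01:ab", "CMD:02:cd", "CMD:17:zz"]
tpl = infer_template(msgs)
assert all(re.fullmatch(tpl, m) for m in msgs)
print(tpl)
```

The inferred template accepts unseen messages with the same structure (e.g. "CMD:42:xy"), which is what makes the result usable for a dissector or a fuzzer.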

Encoding (XOR, ASN.1), encryption: can define transformation functions, and add more functions; apparently must be selected manually.

Finding field relations automatically: Try various transformations of a field (or a field combination), then use "maximal information coefficient" for finding correlated values. Includes environmental context as possible information sources.

Protocol rules: collect message sequences => build automata (with probability and reaction time on arcs). Then Angluin's L* to infer a grammar.

There is a GUI to help with all of this, interactively naming fields / changing display format and the like.

Notes from 29th Chaos Communication Congress – day 2

See also day 1, day 3 and day 4.


Certificate Authority Collapse


DigiNotar notes:



  • A single MS Exchange controlling all servers.
  • Administrator password Pr0d@dmin.
  • 30 software updates ignored, including some years-old ones.
  • Mitigations:

    • The Dutch government took over DigiNotar, with no known legal basis(!): "On a private law basis = DigiNotar submitted voluntarily"
    • Trust revocation was delayed in the Dutch market for weeks by Microsoft.
    • The mitigation is considered a success story by the Dutch government.

  • The government allowed DigiNotar certificates in e-commerce (tax submissions) 11 months after the cert breach!
  • DigiNotar is still running as an intermediate CA. [What does it mean after blacklisting?]


"Systemic vulnerabilities":


  • Any CA can vouch for any domain name.
  • CAs trusted "by default" - you go through a paper trail, not audit trail.
  • Intermediate CAs "sublet" root status.
  • It's difficult to attribute an attack to a specific attacker.
  • Information asymmetry - victim can hide the risk to its users.
  • CA revocation: connectivity vs. security trade-off, and "the end user only wants connectivity".
  • Poor HTTPS implementation (see SSL pulse)


EU eSignature regulation:
  • "Regulation" => once adopted at EU, directly binding in all member states
  • Covers "trust service providers established in the EU", incl. CAs; other stakeholders (HTTPS servers) unregulated (the only known argument for this restriction is that EU organizations are also insecure).
  • CAs to be liable for "any direct damage" => better incentives, but the possible liability is very large => large insurance expense => barrier to entry.


"EU should":


  • Appraise all underlying values, incl. privacy
  • Evaluate incentives of all stakeholders X no specific problem mentioned

(The speaker suggested that the liability should lie with websites instead; overall it looked a little like a lobbying effort to ease the CA burden.)

Some interesting lightning talks



  • SMrender = a programming language for OpenStreetMap data; www.abenteuerland.at/smrender/
  • GNUnet: secure p2p networking, "censorship resistant DHT". Anonymous NAT traversal (established by an ICMP reply forwarded to the inside)
  • OpenITP peer review board: "qualified" projects will be audited by commercial firms, sponsored by OpenITP.


The Tor software ecosystem


A fairly long list of various Tor software projects, asking for help (https://www.torproject.org/volunteer). Only a few highlights here:

  • "Thandy": a package installer/updater that protects against attacks on the update system, e.g. pretending that there isn't an update. Might be interesting for Fedora as well?
  • TLSDate = a time client using TLS for transport. Not a good NTP replacement, but cryptographically protects current time (=> can detect obsolete "Tor consensus" etc.)
  • Tor cloud bridge images (for amazon and others). Can be used to contribute to Tor, or for personal use (to spin up a personal bridge on Amazon when Amazon is available through the firewall)
  • Some statistics from compass.torproject.org: 31% of exit nodes in Germany, 21% in the US; a single German AS owned by CCC accounts for 19%. 500k users/day.
  • Flashproxy: any client visiting a web site is turned into a Tor bridge (!) Implemented as AJAX, which can do only outgoing connections => it connects to Tor, and to the "censored user" X still need to somehow pierce NAT.
  • Tor is 80% funded by the US government. When asked why it should be
    trusted, the answer was "It's the only system where you can read the source
    code and see whether it can be trusted."


FactHacks


Various loosely connected topics regarding RSA factorization. See also http://facthacks.cr.yp.to/.

  • "Factoring probably isn't NP-hard."
  • The sage math package can factor a 256-bit modulus in ~500 s.
  • Instead of factoring, can we just guess? There are >2^502 primes between 2^511 and 2^512 => in the ideal case unusable, but if the RNG is weak...
  • A fairly difficult attack: buy various devices, generate billions of keys,
    check whether any of the primes divide the attacked N. This actually works:

    • In 1995, the Netscape browser generated only ~2^47 keys in a specific second.
    • Debian key generator bug

  • Easy attack: take two keys, look for a common factor. Use batch GCD for testing all pairs (the "Minding Your P's and Q's" paper by Heninger et al. describes this in more detail) => https://factorable.net/: found tens of thousands of keys. There was another similar paper in the same time frame.
  • More examples of bad randomness: Chou 2012: factored 103 Taiwan Citizen Digital Certificates (out of 2.26 million); the corresponding private keys are stored on smart cards.
  • Overview of algorithms for factorization:

    • Trial division takes time about p/log(p) to find p.
    • Pollard's rho method: typically about sqrt(p) steps.
    • Pollard's p-1 method: also about sqrt(p) steps, but only for "smooth" primes (p-1 has lots of small factors). We can avoid such p values, but there is a generalization...
    • = "Elliptic curve method": works if the number of points on a curve modulo p is smooth; there are many such curves => we can't exclude "weak" p values => we want to choose p and q of the same size to avoid small factors (but shouldn't take p and q too close to each other, or it's possible to brute-force the area around sqrt(N))
    • Fermat factorization: always works, but O(p) steps if the primes are not close

    • A ~1024-bit RSA key can be factored in ~2^80 operations (using the number field sieve) [whatever "operation" means]. This is already feasible for botnets and large organizations! It is estimated that scanning ~2^70 differences will factor any 1024-bit key; the Conficker botnet could do that in < a year (but it would need to remain undetected over the year => can't use full CPU power). On the other hand, a private computer cluster (e.g. NSA, China) might do it. (NSA plans a computer center with 65 MW => ~2^19 typical CPUs, i.e. ~2^84 floating-point multiplications/year.)


  • Factoring via Google: "BEGIN RSA PRIVATE KEY" site:pastebin.com (15 000 results (!)).

    • As a special case, there is a pasted key with the "middle" missing; an RSA private key contains many precomputed values => there is redundancy, and even if you have only part of a single value of the private key, you can recover the rest.

  • Lessons:

    • Stop using 1024-bit RSA.
    • Make sure your primes are big enough.
    • Make sure your primes are random.
    • pastebin is not a secure cloud store
    • ... and you shouldn't put keys even in a secure cloud store anyway.
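The batch-GCD attack from the factorable.net bullet above is easy to sketch: two RSA moduli that share a prime are both broken by a single gcd. (The real attack uses product/remainder trees to scale to millions of keys; this naive version with tiny made-up "moduli" only shows the principle.)

```python
from math import gcd, prod

def shared_factors(moduli):
    P = prod(moduli)
    out = {}
    for n in moduli:
        g = gcd(n, P // n)      # gcd of n with the product of all others
        if 1 < g < n:
            out[n] = g          # nontrivial factor found: n is factored
    return out

# Tiny demo moduli: the 1st and 3rd share the prime 101.
moduli = [101 * 103, 107 * 109, 101 * 113, 127 * 131]
broken = shared_factors(moduli)
print(sorted(broken.values()))    # [101, 101]
```

One gcd per modulus against the product of all the others is what made scanning internet-wide key corpora practical.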



Q/A session:


Would you recommend ECC or multiple-prime RSA?

ECC will give you much higher security level if you have performance
limitations. OTOH DSA (including ECDSA) is much worse than RSA in terms of randomness failures.

What is a recommended alternative to 1024-RSA?

For 5-10 years "very comfortable recommending" 256-bit ECC.

Can you quantify the proportion of "weak" primes?

For 1024-bit keys they are rare enough that it is not necessary to worry about it.

How do I know that my RNGs are fine for generating keys?

"Seed them": Linux has patched some problems we found. Generating keys on a general-purpose computer that has been running for a long time is probably fine; wouldn't generate keys on a low-power computer.

Can I estimate the quality of a generator from a sample of keys?

Use factorable.net, check against the internet-scraped key list; or do the same on a large sample of keys generated by the RNG under question.

How much effort would it be to upgrade to RSA 4096 or 8192 key?



  • You'll notice degradation of performance (2048 is not a big deal)

  • The US govt recommended stopping use of 1024-bit keys as of 2010; they are still used everywhere, so in practice expect difficulty.

  • Financial standards impose a 1984-bit maximum while at the same time requiring 4096 :)

  • 2048-bit RSA may make a busy server unable to handle the load => ECC preferred



How do I unit-test a key generator?

NIST standards for RNG testing isolate the random and deterministic parts of the code => can unit-test the deterministic part. Unfortunately, ECDSA needs RNG for each operation, so the two parts are mixed. (Story: a manufacturer had a testing step in the production that switched all devices in a group at the same time => all inputs were the same => long-term keys generated on first boot were all generated the same.)



Defeating Windows memory forensics


Memory forensics is increasingly popular:


  • It's easier to detect malware in memory than hidden on disk, there is also memory-only malware.
  • You can see resource use, connections (incl. recently terminated)
  • You can find passwords.

Memory acquisition methods:


  • Hibernation file
  • Memory dump on blue screen
  • External tools: use a kernel-mode driver to access the physical memory (often just a proxy to \\Device\PhysicalMemory, or uses low-level APIs). Such a crash dump may or may not include device memory or register contents.

Dump Analysis: "Personal choice" of what tool to use: "Volatility" framework.

Current antiforensic techniques:


  • Block acquisition: Prevent loading the (well-known) driver (E.g. a metasploit script, not available any more). Possible evasion method: just rename the process or driver.
  • "1-byte modification": Every tool has "abort factors" that break the analysis implementation => just make small modifications to the OS in memory. This doesn't let you hide anything you choose, and breaking analysis will be noticeable.
  • "Shadow walker" = custom page fault handler: data access redirected elsewhere, code execution allowed (desynchronized code/data TLB). This is "really unstable", unusable on multi-processor, code of the page fault handler can't be hidden. Impacts performance.

The weakest link in dump acquisition process: storing the dump => a rootkit can fake the dump as it is being stored. Something similar was already done in 2006 for disk forensics.

=> Newly introduced tool in this talk: Dementia.


  • Kernel implementation: can hook NtWriteFile() (not supported by MS, prevented on 64-bit), or use a filesystem minifilter driver.
  • Detecting that a memory dump is being written by "patterns" (NtWriteFile arguments / process name / driver / FILE_OBJECT values/flags).
  • Then either scan the data being written for what we want to hide (which is slow), or build a list of all "somehow related" pointers, then find and replace them (e.g. remove the structure from a linked list); this relies on internal data structure layout, but "WinDbg can do it" (using MS PDB symbols, DbgHelp API). Still, "traces of activity are everywhere". [In Volatility: just deleting the "proc" allocation will fool most of the plugins - dereferencing a pointer/handle that doesn't point to a known process aborts the analysis step]
  • (The talk contained a detailed list of activity done by Dementia to hide information...)
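The pointer-replacement step ("remove the structure from a linked list") can be modeled in a few lines (a hypothetical Python stand-in for the kernel structures, not Dementia's actual code):

```python
# DKOM-style unlinking: once the neighbors point past the victim node,
# a forensic list walk over the dumped data no longer sees it.
class Node:
    def __init__(self, name):
        self.name, self.prev, self.next = name, None, None

def link(nodes):
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a

def unlink(node):
    node.prev.next = node.next
    node.next.prev = node.prev

procs = [Node(n) for n in ("System", "explorer.exe", "malware.exe", "lsass.exe")]
link(procs)
unlink(procs[2])                      # hide "malware.exe"

seen, cur = [], procs[0]              # what a list-walking plugin reports
while cur:
    seen.append(cur.name); cur = cur.next
print(seen)    # ['System', 'explorer.exe', 'lsass.exe']
```

The hidden allocation itself is still present in the dump, which is why "traces of activity are everywhere" and pool-scanning plugins can still find it.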


Actually... almost all memory dump tools copy the data from physical memory to user mode, and from there to the file => a user-mode-only attack is sufficient to hook the dump file writing (it would still need admin privileges, because the dump tool runs as admin) - but it's actually more difficult to do in user mode only: no knowledge of kernel addresses of data structures or of the physical/virtual mapping (because the only known information is what has already been written; we can't do random access inside the image being written to search for information).

Limitations of Dementia:


  • Quite a few object types not hidden
  • Not even hiding itself fully
  • x64 port not yet done

Conclusions:


  • Dump acquisition tools should write image from kernel-mode: it's more secure and faster
  • Hardware acquisition methods (e.g. Firewire) preferred
  • Use native crash dumps instead of raw dumps
  • Perhaps search for rootkits first? It's not clear whether it would be effective.



Let me answer that for you


"GSM paging": the paging channel (downlink-only broadcast) is used to start a "service" (call/SMS). Each message contains a mobile identity (IMSI/TMSI) (T = "temporary" mobile subscriber identity, for privacy). On paging receipt:

  • The phone sends a channel request.
  • The phone gets a channel assignment.
  • The phone responds to the paging message.
  • Then communication happens on the assigned channel, authenticated/encrypted. This is supposedly not possible to easily hijack, the authentication/encryption uses a SIM-based key.

We can send a bad paging response => duplicate responses on the air, typically a DoS.


  • Not limited to a single BTS, paging is sent to a wider "location area".
  • How to respond faster than the victim? Weather etc. are a factor, but the baseband latency is most important => modified osmocombb, integrating the relevant L2/L3 code inside L1.

To identify victim address:


  • Just block everyone.
  • Use 3rd party services to lookup the number->IMSI mapping in HLR.
  • If TMSI is used: create a call, drop it soon enough (=> no ring), or send a silent SMS; then sniff for the paging request, and capture the TMSI.

Hijacking content delivery:


  • Handling encryption: India reportedly still uses no encryption. Some networks use A5/2, which is too weak. A5/1 is broken now as well, almost in real time. A5/3 is not deployed yet (some phone manufacturers haven't implemented it correctly, so deploying it would break networks).
  • Handling authentication: 50% of networks authenticate only 10% of SMSs/calls [because they earn money only on outgoing calls?] => 90% of services can be hijacked (the victim never receives them)

In O2 Germany: the hijacker sets up a channel, but doesn't respond to auth request; victim then responds to the paging request and authenticates ordinarily => hijacker is now authenticated as well!

Attacking larger areas:


  • It's enough to be within the same "location area", not limited to a single BTS. Based on GPS logs, there are ~5 Vodafone location areas in all of Berlin (Berlin: 100-500 km²); non-city areas are even larger, seen up to 1000 km². => paging DoS is far more effective than jamming such a large area.
  • To make the attack more efficient, instead of reacting to every paging request, can send an "IMSI DETACH" (ordinarily sent by the phone when shutting down; the request is not authenticated at all!).
  • On a real network, Vodafone sends about 1200 paging requests per second. The current attack takes about 1 second per request ... but the necessary Motorola phones are cheap [active attackers are just not taken into account by GSM standards].

Once you can hijack a SMS, you can do MITM - there are gateways that allow specifying a sender; or just don't ACK the transmission and it will be resent to the victim.

3G: in theory using the same system, but it is on a separate network => we can't use the current GSM attacking hardware.

Countermeasures:


  • Authenticate 100% of the time to protect against hijacking
  • Authenticate paging requests to protect against DoS (=> changing this is infeasible.)




ESXi Beast


Introduced "Canape", a free-as-in-beer protocol testing tool, a "GUI IDE" in .NET. For traffic capturing, it supports SOCKS, port-forwarding MITM, and an application-level HTTP/SSL proxy.


ESXi protocol: network management of virtualization products. Actually several protocols over one connection: remote desktop, file transfer, database, ...; transitioning from/to SSL over the socket over time.

The talk was mostly a demo of the tool.