
Tuesday, May 27, 2014

How to use F5 Wireshark Plugin for LTM troubleshooting

In this post we are going to look at how to use the F5 Wireshark plugin to troubleshoot networking issues on BIG-IP LTM.
  • Download and install the plugin in your Wireshark
The full instructions are here: F5 Wireshark Plugin. In essence, you need to copy the f5ethtrailer.dll file into C:\Program Files (x86)\wireshark\wireshark16\WiresharkPortable\ and restart Wireshark.

Once you restart Wireshark, go to the menu Help - About Wireshark and open the Plugins tab. If the plugin is properly installed, you should see it listed there.

  • The plugin is useful only if you take a capture on the LTM with 'noise' information.
The noise is internal information that TMM attaches to and manages for every packet while it is being processed. To get a capture with noise, these are the minimal options you need to specify:

tcpdump -w /var/tmp/capture.pcap -s0 -i _interface_:nnn

where the _interface_ can be:
    • 1.1 - an example of a physical interface
    • dmz_vlan - a name you gave to your VLAN when it was created
    • 0.0 - the equivalent of the 'any' interface, which means capture on all interfaces and all VLANs
My favourite syntax is usually something like this:

tcpdump -s0 -nn -w /var/tmp/test1-$(date +%s).pcap -i 0.0:nnn '(host _ip_ and port _port_ ) or arp or not ip' 
  • Open the capture in Wireshark as normal
Once you open it, you will notice that there is an additional section in the packet details.

  • The most useful part of this plugin is that you can quickly and easily find the client- and server-side traffic in the capture (this can be challenging when you have multiple TCP streams and a OneConnect profile):
    • Find a single packet of the flow you are interested in (search for the VIP or client IP, for example).
    • Find the "Flow ID" from the F5 Ethernet trailer (see the picture above for example).
    • Right-click on the Flow ID field and select "Prepare as Filter".
    • The Filter box (at the top) will be pre-populated with the syntax for you.
    • Copy the hex value, delete the '.flowid == hex' part and start typing '.' (dot).
    • It will immediately give you a list of possible options; select anyflowid and copy the hex value back as it was originally. Example:
The original filter         : f5ethtrailer.flowid == 0x0d2e6dc0
Filter after modifications  : f5ethtrailer.anyflowid == 0x0d2e6dc0
    • Press the Apply button
This filter is going to find the client- and server-side flows for you. You can then analyse them packet by packet to find out and understand how and why LTM load balances to one or another pool member.
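
If you prefer the command line, the same display filter should work in tshark as well, assuming the plugin is installed for it too (a sketch; the flow ID is the example value from above):

tshark -r /var/tmp/capture.pcap -Y 'f5ethtrailer.anyflowid == 0x0d2e6dc0'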

References

https://devcentral.f5.com/wiki/advdesignconfig.F5WiresharkPlugin.ashx
https://devcentral.f5.com/questions/tcpdump-with-multiple-pool-members
SOL13637: Capturing internal TMM information with tcpdump

Sunday, January 12, 2014

DES and IDEA ciphers are deprecated in the latest TLS protocol

When incorporating security into your solution and applications, it is important to maintain a high-level view and follow security best practices. That means you need a firewall (FW). The FW should have DMZ and Inside segments. To actively protect your web applications you can deploy a WAF or another kind of IPS. To passively monitor traffic you can additionally implement an IDS.

But as our solution is extended with new and more sophisticated network devices, it is still important to understand and maintain the low-level security parameters of the network protocols. By low level I mean the details of the TLS/SSL protocols that are used under HTTPS, for example.

Problem

Is it secure or recommended to enable and support the DES or IDEA ciphers in an application or on SSL-offloading load balancers?

Analysis and results discussion

According to RFC 5469 IDEA and DES should not be used any more. The reasons are listed in the RFC.

To verify if your server responds to clients using these ciphers you can try:
 
# (1)
# openssl s_client -connect server_ip:443 -cipher DES-CBC-SHA -ssl3
# or
# (2)
# openssl s_client -connect server_ip:443 -cipher DES-CBC-SHA -tls1
# or 
# (3)
# openssl s_client -connect server_ip:443 -cipher DES-CBC-SHA 
CONNECTED(00000003)
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
verify return:1
---
Certificate chain
 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
   i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICATCCAWoCCQCxkFtlc6Bd0TANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMB4XDTE0MDExMjIxMzMwN1oXDTE1MDExMjIxMzMwN1owRTELMAkG
A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0
IFdpZGdpdHMgUHR5IEx0ZDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAtVtD
BfrmHU/T9m4xvvlP7+J4zJ2BYFY8QfSvQ1tyQw+BwvPyh9zyzgd0Zw4iOa6ThlQ3
GTr7e3FMQooMWpK0XXTYKbbWGqyVfnkcwmWjapJxOv8OaXlDS5TIc7MursFXp16e
oOjvpyuddX2gilQLiO6n1b6vyKsFfPW0eoPPmf8CAwEAATANBgkqhkiG9w0BAQUF
AAOBgQBGd8xD6ZINxy8Vf1jFrX+4EyPEL3+DkAU4lInd83kIuDd8i2fzia4YOfKh
JB3/ML8kLGLMh6R0WpHbaoGQvNM5qn7GdFL+DDBvXqlyZtIrfKamx+s5GxUiP0SV
5miO9Oh1mkxhXUqaVHaJR0DeTYEAuA0dc1lMoJlPoSMedlgJBg==
-----END CERTIFICATE-----
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
---
No client certificate CA names sent
---
SSL handshake has read 710 bytes and written 273 bytes
---
New, TLSv1/SSLv3, Cipher is DES-CBC-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: zlib compression
Expansion: zlib compression
SSL-Session:
    Protocol  : SSLv3
    Cipher    : DES-CBC-SHA
    Session-ID: A5568C18EFB2DA77B729A247EA8E605BEBC4DF478129357D002C26DFA89F96C7
    Session-ID-ctx:
    Master-Key: F9CDF6CD91F3E4F5117758104906C779E18493062397EFFE7E4C518F0894398A01D969D5EE07804ED436A24444CD92FA
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Compression: 1 (zlib compression)
    Start Time: 1389565902
    Timeout   : 7200 (sec)
    Verify return code: 18 (self signed certificate)
---

An example ssldump capture of the handshake from (3), showing that the server supports the legacy and deprecated ciphers.
 
# ssldump -A -n -i lo port 443
New TCP connection #1: 127.0.0.1(50211) <-> 127.0.0.1(443)
1 1  0.0007 (0.0007)  C>SV3.1(59)  Handshake
      ClientHello
        Version 3.2
        random[32]=
          52 d3 18 25 a9 86 1c 58 ff f0 90 ca fe ba f8 eb
          c8 23 46 fd 5b 7a 4a aa 51 c2 37 40 6a 8b dc 01
        cipher suites
        TLS_RSA_WITH_DES_CBC_SHA
        Unknown value 0xff
        compression methods
                unknown value
                  NULL
1 2  0.0010 (0.0003)  S>CV3.2(58)  Handshake
      ServerHello
        Version 3.2
        random[32]=
          52 d3 18 25 95 9c 3e 34 80 d8 00 3d fe 02 8f bf
          3c 1a 72 5d d1 4f 30 8c 6c 3b fa 64 0e 82 1c 6c
        session_id[0]=

        cipherSuite         TLS_RSA_WITH_DES_CBC_SHA
        compressionMethod                 unknown value
1 3  0.0021 (0.0011)  S>CV3.2(527)  Handshake
      Certificate
        certificate[517]=
          30 82 02 01 30 82 01 6a 02 09 00 b1 90 5b 65 73
          a0 5d d1 30 0d 06 09 2a 86 48 86 f7 0d 01 01 05
          05 00 30 45 31 0b 30 09 06 03 55 04 06 13 02 41
          55 31 13 30 11 06 03 55 04 08 0c 0a 53 6f 6d 65
          2d 53 74 61 74 65 31 21 30 1f 06 03 55 04 0a 0c
          18 49 6e 74 65 72 6e 65 74 20 57 69 64 67 69 74
          73 20 50 74 79 20 4c 74 64 30 1e 17 0d 31 34 30
          31 31 32 32 31 33 33 30 37 5a 17 0d 31 35 30 31
          31 32 32 31 33 33 30 37 5a 30 45 31 0b 30 09 06
          03 55 04 06 13 02 41 55 31 13 30 11 06 03 55 04
          08 0c 0a 53 6f 6d 65 2d 53 74 61 74 65 31 21 30
          1f 06 03 55 04 0a 0c 18 49 6e 74 65 72 6e 65 74
          20 57 69 64 67 69 74 73 20 50 74 79 20 4c 74 64
          30 81 9f 30 0d 06 09 2a 86 48 86 f7 0d 01 01 01
          05 00 03 81 8d 00 30 81 89 02 81 81 00 b5 5b 43
          05 fa e6 1d 4f d3 f6 6e 31 be f9 4f ef e2 78 cc
          9d 81 60 56 3c 41 f4 af 43 5b 72 43 0f 81 c2 f3
          f2 87 dc f2 ce 07 74 67 0e 22 39 ae 93 86 54 37
          19 3a fb 7b 71 4c 42 8a 0c 5a 92 b4 5d 74 d8 29
          b6 d6 1a ac 95 7e 79 1c c2 65 a3 6a 92 71 3a ff
          0e 69 79 43 4b 94 c8 73 b3 2e ae c1 57 a7 5e 9e
          a0 e8 ef a7 2b 9d 75 7d a0 8a 54 0b 88 ee a7 d5
          be af c8 ab 05 7c f5 b4 7a 83 cf 99 ff 02 03 01
          00 01 30 0d 06 09 2a 86 48 86 f7 0d 01 01 05 05
          00 03 81 81 00 46 77 cc 43 e9 92 0d c7 2f 15 7f
          58 c5 ad 7f b8 13 23 c4 2f 7f 83 90 05 38 94 89
          dd f3 79 08 b8 37 7c 8b 67 f3 89 ae 18 39 f2 a1
          24 1d ff 30 bf 24 2c 62 cc 87 a4 74 5a 91 db 6a
          81 90 bc d3 39 aa 7e c6 74 52 fe 0c 30 6f 5e a9
          72 66 d2 2b 7c a6 a6 c7 eb 39 1b 15 22 3f 44 95
          e6 68 8e f4 e8 75 9a 4c 61 5d 4a 9a 54 76 89 47
          40 de 4d 81 00 b8 0d 1d 73 59 4c a0 99 4f a1 23
          1e 76 58 09 06
1 4  0.0021 (0.0000)  S>CV3.2(4)  Handshake
      ServerHelloDone
1 5  0.0085 (0.0063)  C>SV3.2(134)  Handshake
      ClientKeyExchange
        EncryptedPreMasterSecret[128]=
          71 83 c8 f4 af ab be 5e a6 e0 ec 06 ab 14 be e3
          41 25 5f f9 9e b3 29 a1 a5 1a a9 25 8d c8 1e 3d
          f2 06 3b 50 68 58 ca 1b bf 9b 1a e5 3f 4d c7 f5
          43 67 93 a1 fc f8 16 9e 35 24 7f a6 4c ad 9b 0f
          c4 db 6e a8 3d 97 5e 5f 96 0f 40 7b a3 42 62 e4
          7c 07 f9 65 97 a4 52 1a 30 cc 11 d6 43 06 7d 85
          4b e9 d5 1e 2e af 9a bd 90 cd 4d 6e aa 9e 00 29
          07 12 cd 96 bd 59 ca 5c dc a3 88 00 53 6e 8f ec
1 6  0.0085 (0.0000)  C>SV3.2(1)  ChangeCipherSpec
1 7  0.0085 (0.0000)  C>SV3.2(56)  Handshake
1 8  0.0099 (0.0014)  S>CV3.2(170)  Handshake
      TLS_RSA_WITH_RC4_128_MD5
1 9  0.0476 (0.0376)  S>CV3.2(1)  ChangeCipherSpec
1 10 0.0476 (0.0000)  S>CV3.2(56)  Handshake
1    0.7913 (0.7436)  C>S  TCP FIN
1    0.7917 (0.0004)  S>C  TCP FIN

Output proving the ciphers are not supported.
 
# ssldump -A -n -i eth0 port 443 and host 31.222.129.61
New TCP connection #1: 162.13.0.27(34228) <-> 31.222.129.61(443)
1 1  0.0017 (0.0017)  C>SV3.1(59)  Handshake
      ClientHello
        Version 3.2
        random[32]=
          52 d3 19 53 c5 78 4c 06 8c e7 fc 47 a1 92 ec a4
          90 63 ca a2 6e a5 7e 58 bb 72 9b a1 be c1 84 3a
        cipher suites
        TLS_RSA_WITH_DES_CBC_SHA
        Unknown value 0xff
        compression methods
                unknown value
                  NULL
1 2  0.0021 (0.0003)  S>CV3.1(2)  Alert
    level           fatal
    value           handshake_failure
1    0.0021 (0.0000)  S>C  TCP FIN
1    0.0044 (0.0022)  C>S  TCP FIN

 
# openssl s_client -connect 31.222.129.61:443 -state -msg -cipher DES-CBC-SHA
CONNECTED(00000003)
SSL_connect:before/connect initialization
>>> TLS 1.1  [length 003b]
    01 00 00 37 03 02 52 d3 19 53 c5 78 4c 06 8c e7
    fc 47 a1 92 ec a4 90 63 ca a2 6e a5 7e 58 bb 72
    9b a1 be c1 84 3a 00 00 04 00 09 00 ff 02 01 00
    00 09 00 23 00 00 00 0f 00 01 01
SSL_connect:unknown state
SSL3 alert read:fatal:handshake failure
<<< TLS 1.0 Alert [length 0002], fatal handshake_failure
    02 28
SSL_connect:error in unknown state
139646822749888:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:741:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 64 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
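
To check several legacy ciphers in one go, a small shell loop around openssl works (a sketch; server_ip is a placeholder and the cipher names must exist in your OpenSSL build):

for c in DES-CBC-SHA DES-CBC3-SHA IDEA-CBC-SHA; do
  echo "== $c =="
  echo | openssl s_client -connect server_ip:443 -cipher "$c" 2>&1 | grep -E 'Cipher is|handshake failure'
done

A supported cipher reports 'Cipher is <name>'; an unsupported one fails with a handshake_failure alert, as in the output above.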

References

DES and IDEA Cipher Suites for Transport Layer Security (TLS)
http://www.ietf.org/rfc/rfc5469.txt

The TLS Protocol, Version 1.0
http://www.ietf.org/rfc/rfc2246.txt

The Transport Layer Security (TLS) Protocol, Version 1.2
http://tools.ietf.org/html/rfc5246.txt

Monday, January 6, 2014

ASA performance troubleshooting tips

This is more of a work in progress. Below are a couple of tips and ideas on how to deal with high-traffic performance issues.

Limit connection per IP

Often the load is generated by a single unique IP (or a group of IPs). To limit the number of connections per client:

access-list http_conn_limit extended permit tcp any any eq www 
! access-list http_conn_limit extended permit tcp any any eq https
! you can add any other ACL to catch the interesting traffic 

class-map http_conn_limit_class
 match access-list http_conn_limit

policy-map http_conn_limit_map
 class http_conn_limit_class
  set connection per-client-max 100 

service-policy global_policy global
service-policy http_conn_limit_map interface outside
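
To verify that the policy is attached and the per-client limit is being enforced, you can check the service-policy counters (a sketch; the exact output varies by ASA version):

fw-asa# show service-policy interface outside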

Reference:
http://rtomaszewski.blogspot.co.uk/2013/12/cisco-asa-connection-table-state.html
http://www.itlibrary.net/index.php/cisco-asa/8-limiting-connections-rate-for-traffic-destined-on-port-80
http://www.cisco.com/en/US/docs/security/asa/asa72/configuration/guide/mpc.html
http://blog.ine.com/2009/04/19/understanding-modular-policy-framework/

Kick off a client's sessions

If you identify a client whose traffic you want to deny and whose existing connections you want to close:

access-list 101 extended deny ip host [ip] any
shun [ip]
no shun [ip]
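
To confirm the shun is active and see how much traffic it has dropped, you can use these standard ASA commands:

fw-asa# show shun
fw-asa# show shun statistics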

Sunday, December 29, 2013

Cisco ASA connection table state description and examples

In the ASA connection table you can find protocol sessions (TCP, UDP, ICMP and others) together with flags that describe the state of each session (for TCP/IP, for example) at the time the command was run.

The table lists all sessions currently managed by the ASA. From this output you can also see what IPs your clients are coming from and what services they connect to.

Session statuses
 
fw-asa# sh conn  

Flags: A - awaiting inside ACK to SYN, a - awaiting outside ACK to SYN,
       B - initial SYN from outside, C - CTIQBE media, D - DNS, d - dump,
       E - outside back connection, F - outside FIN, f - inside FIN,
       G - group, g - MGCP, H - H.323, h - H.225.0, I - inbound data,
       i - incomplete, J - GTP, j - GTP data, K - GTP t3-response
       k - Skinny media, M - SMTP data, m - SIP media, n - GUP
       O - outbound data, P - inside back connection, p - Phone-proxy TFTP connection,
       q - SQL*Net data, R - outside acknowledged FIN,
       R - UDP SUNRPC, r - inside acknowledged FIN, S - awaiting inside SYN,
       s - awaiting outside SYN, T - SIP, t - SIP transient, U - up,
       V - VPN orphan, W - WAAS,
       X - inspected by service module

Example flag meanings from the session entries
 
UB
U - up,
B - initial SYN from outside,

UO
U - up,
O - outbound data,

UIB
U - up,
I - inbound data,
B - initial SYN from outside,

UIOB
U - up,
I - inbound data,
O - outbound data,
B - initial SYN from outside,

UfIB
U - up,
f - inside FIN,
I - inbound data,
B - initial SYN from outside,

UfrO
U - up,
f - inside FIN,
r - inside acknowledged FIN,
O - outbound data,

UfIOB 
U - up,
f - inside FIN,
I - inbound data,
O - outbound data,
B - initial SYN from outside,

UfFIOB
the same as UfIOB, plus:
F - outside FIN,

UfFRIOB
the same as UfFIOB, plus:
R - outside acknowledged FIN,

UfrIOB
U - up,
f - inside FIN,
r - inside acknowledged FIN
I - inbound data,
O - outbound data,
B - initial SYN from outside,

SaAB
S - awaiting inside SYN,
a - awaiting outside ACK to SYN,
A - awaiting inside ACK to SYN, 
B - initial SYN from outside,

aB
a - awaiting outside ACK to SYN,
B - initial SYN from outside,

Example flows you can find in the ASA firewall connection table

Usually there are a lot of entries in these states.
 
fw-asa# sh conn detail long

flags UfIOB TCP outside:1.165.177.125/1965 (1.165.177.125/1965) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 52m38s, uptime 54m21s, timeout 1h0m, bytes 3063
flags UfIOB TCP outside:1.172.130.64/1485 (1.172.130.64/1485) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 41m38s, uptime 43m12s, timeout 1h0m, bytes 3063

flags UB TCP outside:1.189.22.195/16208 (1.189.22.195/16208) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UB, idle 45m6s, uptime 48m17s, timeout 1h0m, bytes 0
flags UB TCP outside:1.56.45.22/24654 (1.56.45.22/24654) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UB, idle 45m54s, uptime 49m4s, timeout 1h0m, bytes 0

Common but less frequent state
 
flags UfFIOB TCP outside:1.55.216.14/14104 (1.55.216.14/14104) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfFIOB, idle 41m51s, uptime 43m24s, timeout 1h0m, bytes 3002
flags UfFIOB TCP outside:110.81.84.50/20230 (110.81.84.50/20230) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfFIOB, idle 52m55s, uptime 54m28s, timeout 1h0m, bytes 3063

flags UfFRIOB TCP outside:109.109.38.148/4760 (109.109.38.148/4760) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfFRIOB, idle 3s, uptime 15s, timeout 5m0s, bytes 2261
flags UfFRIOB TCP outside:112.12.221.155/3753 (112.12.221.155/3753) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfFRIOB, idle 0s, uptime 0s, timeout 5m0s, bytes 1008

flags UfIB TCP outside:121.35.47.128/1481 (121.35.47.128/1481) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIB, idle 23m54s, uptime 26m28s, timeout 1h0m, bytes 1106
flags UfIB TCP outside:183.11.2.56/4589 (183.11.2.56/4589) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIB, idle 47m15s, uptime 49m48s, timeout 1h0m, bytes 1106

flags SaAB TCP outside:112.72.135.224/7494 (112.72.135.224/7494) inside:192.168.55.172/4567 (1.2.157.172/4567), flags SaAB, idle 0s, uptime 0s, timeout 1m0s, bytes 0
flags SaAB TCP outside:113.170.107.218/4472 (113.170.107.218/4472) inside:192.168.55.172/4567 (1.2.157.172/4567), flags SaAB, idle 0s, uptime 0s, timeout 1m0s, bytes 0

flags UfrO TCP outside:202.168.215.226/80 (202.168.215.226/80) inside:192.168.55.172/3845 (1.2.157.172/3845), flags UfrO, idle 6s, uptime 8s, timeout 10m0s, bytes 1182

flags UIOB TCP outside:61.187.244.179/9571 (61.187.244.179/9571) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 38m13s, uptime 39m46s, timeout 1h0m, bytes 2897
flags UIOB TCP outside:67.47.251.34/14921 (67.47.251.34/14921) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 48m14s, uptime 49m50s, timeout 1h0m, bytes 3348

TCP outside:1.2.27.69/49856 (1.2.27.69/49856) FW-INSIDE:192.168.100.112/80 (11.22.192.112/80), flags UIB, idle 0s, uptime 0s, timeout 1h0m, bytes 581

flags UO TCP outside:202.168.215.226/80 (202.168.215.226/80) inside:192.168.55.172/3848 (1.2.157.172/3848), flags UO, idle 7s, uptime 7s, timeout 1h0m, bytes 1182

TCP outside:220.135.240.219/61139 (220.135.240.219/61139) inside:192.168.55.172/4567 (1.2.157.172/4567), flags aB, idle 0s, uptime 0s, timeout 1m0s, bytes 0
TCP outside:220.135.240.219/61138 (220.135.240.219/61138) inside:192.168.55.172/4567 (1.2.157.172/4567), flags aB, idle 0s, uptime 0s, timeout 1m0s, bytes 0

# without the 'long' parameter
TCP outside 94.5.94.11:59458 FW-DMZ-LB 192.168.67.79:80, idle 0:04:31, bytes 19424, flags UfrIOB
TCP outside 94.5.94.11:59463 FW-DMZ-LB 192.168.67.72:80, idle 0:04:05, bytes 7181, flags UfrIOB

You can specify additional parameters to filter the output for connection entries in a specific state.
 
fw-asa# sh conn detail  long state tcp_embryonic all

TCP outside:220.135.240.219/61139 (220.135.240.219/61139) inside:192.168.55.172/4567 (1.2.157.172/4567), flags aB, idle 0s, uptime 0s, timeout 1m0s, bytes 0
TCP outside:220.135.240.219/61138 (220.135.240.219/61138) inside:192.168.55.172/4567 (1.2.157.172/4567), flags aB, idle 0s, uptime 0s, timeout 1m0s, bytes 0

fw-asa# sh conn long state data_out

TCP outside:112.65.211.244/6680 (112.65.211.244/6680) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 0s, uptime 3m48s, timeout 1h0m, bytes 72509
TCP outside:113.247.3.129/3253 (113.247.3.129/3253) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 1s, uptime 6m12s, timeout 1h0m, bytes 139249
TCP outside:2.176.137.197/1950 (2.176.137.197/1950) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 5m37s, uptime 7m14s, timeout 1h0m, bytes 3002
TCP outside:171.118.104.53/64054 (171.118.104.53/64054) inside:192.168.55.172/80 (1.2.157.172/80), flags UIOB, idle 8s, uptime 7m27s, timeout 1h0m, bytes 98878
TCP outside:219.139.32.90/4141 (219.139.32.90/4141) inside:192.168.55.172/80 (1.2.157.172/80), flags UIOB, idle 7s, uptime 7m32s, timeout 1h0m, bytes 94113

fw-asa# sh conn long state data_in

TCP outside:112.65.211.244/6680 (112.65.211.244/6680) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 4s, uptime 3m37s, timeout 1h0m, bytes 44907
TCP outside:113.247.3.129/3253 (113.247.3.129/3253) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 1s, uptime 6m1s, timeout 1h0m, bytes 137801

fw-asa# sh conn long state finin

TCP outside:138.91.170.208/1264 (138.91.170.208/1264) inside:192.168.55.172/80 (1.2.157.172/80), flags UfFRIOB, idle 0s, uptime 0s, timeout 5m0s, bytes 5052
TCP outside:2.176.137.197/1950 (2.176.137.197/1950) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 4m45s, uptime 6m21s, timeout 1h0m, bytes 3002
TCP outside:2.176.137.197/1653 (2.176.137.197/1653) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 5m6s, uptime 6m43s, timeout 1h0m, bytes 3002

fw-asa# sh conn long state up

TCP outside:112.65.211.244/6680 (112.65.211.244/6680) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 0s, uptime 2m50s, timeout 1h0m, bytes 37914
TCP outside:113.247.3.129/3253 (113.247.3.129/3253) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UIOB, idle 4s, uptime 5m14s, timeout 1h0m, bytes 78789
TCP outside:2.176.137.197/1950 (2.176.137.197/1950) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 4m39s, uptime 6m16s, timeout 1h0m, bytes 3002
TCP outside:171.118.104.53/64054 (171.118.104.53/64054) inside:192.168.55.172/80 (1.2.157.172/80), flags UIOB, idle 0s, uptime 6m29s, timeout 1h0m, bytes 89118
TCP outside:219.139.32.90/4141 (219.139.32.90/4141) inside:192.168.55.172/80 (1.2.157.172/80), flags UIOB, idle 9s, uptime 6m35s, timeout 1h0m, bytes 82689
TCP outside:2.176.137.197/1653 (2.176.137.197/1653) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 5m1s, uptime 6m38s, timeout 1h0m, bytes 3002
TCP outside:2.176.137.197/1589 (2.176.137.197/1589) inside:192.168.55.172/4567 (1.2.157.172/4567), flags UfIOB, idle 5m13s, uptime 6m49s, timeout 1h0m, bytes 3002

Thursday, September 26, 2013

How to save browser session in Chrome for offline analysis

There are commercial tools (a small example list) that help you save a browser session so that it can be viewed later. Usually you want to do this when you are troubleshooting an HTTP issue where the browser is involved.

Below is a combination of free tools that can save and open a browser session.

Capture a browser session
  • To capture and save the browser session you can use the Chrome Developer Tools:
    • You can activate them in your active Chrome browser window with the shortcut Ctrl+Shift+I. A new panel will open at the bottom of your screen.
    • Navigate to the Network tab
    • Now you can browse your site(s) and you should see all the requests your browser makes.
    • Once you finish, right-click on any of the requests and select "Save as HAR with content" from the menu to save a HAR file to disk.
Open a browser session for offline analysis 
  • You can send a HAR file to any other person for offline analysis.
  • An example application that can read and display it is Fiddler.
  • From the Fiddler menu, open the HAR file by navigating to File->Import->HTTPArchive
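
A HAR file is plain JSON, so you can also inspect it from the command line, for example with jq (a sketch; session.har is a placeholder file name):

jq '.log.entries[].request.url' session.har

This lists every URL requested during the captured session.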
References

http://fiddler2.com/Features/web-session-manipulation
http://neverblog.net/save-and-share-results-from-chromes-developer-tools-network-panel/
http://blog.chromium.org/2011/02/chrome-developer-tools-back-to-basics.html

Wednesday, May 29, 2013

How to extract a single SSL connection from tcpdump

There is no better tool for SSL troubleshooting than ssldump (a very useful how-to, in the form of an F5 solution, can be found here: SOL10209: Overview of packet tracing with the ssldump utility).

The ssldump tool is not perfect, though. It can produce only text output, and the output is a mixture of SSL handshaking requests and data connections.

This little tool, https://github.com/akozadaev/ssld-extract, can help extract a single SSL session. Example usage is provided below.
root@server:~/ssld-extract/# ssldump -n -r example1.pcap  > example1.pcap.txt
root@server:~/ssld-extract/pp# python ssld-extract.py -c -n1 ~/ssld-extract/example1.pcap.txt
New TCP connection #1: 192.168.0.2(57122) <-> 72.26.232.202(443)
1 1  0.1946 (0.1946)  C>S  Handshake
      ClientHello
        Version 3.1
        resume [32]=
          7b 9a 08 2f 3f c0 5e 70 c8 9e b6 f8 61 a0 4e 9e
          d9 84 07 e5 94 13 f8 e8 87 33 96 0d f4 a4 9f 6a
        cipher suites
        Unknown value 0xc00a
        Unknown value 0xc014
        Unknown value 0x88
        Unknown value 0x87
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        TLS_DHE_DSS_WITH_AES_256_CBC_SHA
        Unknown value 0xc012
        TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
        TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
...
        compression methods
                  NULL
1 2  0.3973 (0.2027)  S>C  Handshake
      ServerHello
        Version 3.1
        session_id[32]=
          d4 65 5e b6 3d 33 88 8c bd 7e 56 65 13 71 9f 52
          30 47 ea e1 c0 d6 1f 72 12 b9 2f 8f 6b 42 b2 68
        cipherSuite         TLS_RSA_WITH_RC4_128_SHA
        compressionMethod                   NULL
1 3  0.3974 (0.0001)  S>C  Handshake
      Certificate
1 4  0.3974 (0.0000)  S>C  Handshake
      ServerHelloDone
1 5  0.4006 (0.0031)  C>S  Handshake
      ClientKeyExchange
1 6  0.4006 (0.0000)  C>S  ChangeCipherSpec
1 7  0.4006 (0.0000)  C>S  Handshake
1 8  0.5794 (0.1788)  S>C  ChangeCipherSpec
1 9  0.5794 (0.0000)  S>C  Handshake
1 10 0.5814 (0.0019)  C>S  application_data
1 11 0.5819 (0.0004)  C>S  application_data
1 12 0.7806 (0.1987)  S>C  application_data
As you can see, it was able to extract the single connection, which is a huge help if you need to analyze a big tcpdump file.
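
As a side note, if you need the packets of a single TCP stream rather than the decoded SSL text, recent tshark builds can cut one stream out of a capture (a sketch; the stream index 0 is hypothetical and can be taken from Wireshark's 'Follow TCP Stream'):

tshark -r example1.pcap -Y 'tcp.stream == 0' -w stream0.pcap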

Monday, March 25, 2013

Example of a failing DNS request

I ran into a DNS issue today. I usually just scan the answer from dig quickly, as I'm only interested in the actual A, PTR or MX records. Today the issue was different. In the two failing example DNS requests below, note the SERVFAIL status code.

Example 1
 
dig @194.2.2.2 www.example.com A

; DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> @194.2.2.2 www.example.com A
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 5995
;; flags: qr rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.example.com. IN A

;; ANSWER SECTION:
www.example.com. 86400 IN CNAME buuu.example.com.

Example 2
 
dig @194.2.2.2 www.example.com

; DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> @83.138.151.80 www.example.com
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 55778
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.example.com. IN A

;; Query time: 12 msec
;; SERVER: 194.2.2.2#53(194.2.2.2)
;; WHEN: Mon Mar 25 17:41:28 2013
;; MSG SIZE  rcvd: 52
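
When you hit a SERVFAIL like this, two quick follow-up queries usually narrow it down (a sketch; example 1 suggests the resolver fails on the CNAME target buuu.example.com rather than on www.example.com itself):

# query the CNAME target directly
dig @194.2.2.2 buuu.example.com A

# walk the delegation chain from the roots to see where resolution breaks
dig www.example.com A +trace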

References
  1. http://networking.ringofsaturn.com/Unix/dnstroubleshooting.php

Monday, March 11, 2013

ASA ssh login problem

Working for an ISP is big fun. Among all the work you do there is one routine task you are bound to perform: swapping network devices (for example a Cisco ASA firewall). Without going into too much detail, the process is straightforward and requires:
  • copy the config to new device
  • rack the new device
  • make sure that the switches and VLANs are configured properly
  • change routing info if needed 

Problem

After putting the new ASA FW into the rack you can connect over the serial line, but you can't access it over SSH. You get this error message:
 
$ ssh 1.1.1.77
ssh_exchange_identification: Connection closed by remote host

Troubleshooting and solution

From the serial console, enable debugging:
 
# debug ssh

Connect over SSH. You are going to see these logs on the console:
 
Device ssh opened successfully.
SSH0: SSH client: IP = '212.100.225.42'  interface # = 2
SSH: unable to retrieve default host public key.  Please create a default RSA key pair before using SSH
SSH0: Session disconnected by SSH server - error 0x00 "Internal error"

Searching for 'unable to retrieve default host public key' finds the links in the references section. To fix this we need:
 
fw-asa(config)# crypto key generate rsa
INFO: The name for the keys will be: 
Keypair generation process begin. Please wait...
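
You can optionally specify the key size explicitly, and you should save the configuration so the key survives a reload (a sketch; syntax may vary slightly between ASA versions):

fw-asa(config)# crypto key generate rsa modulus 2048
fw-asa(config)# write memory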

Once the ASA has its own RSA key to use for the SSH handshake, the logs from a successful SSH session look like this:
 
fw-asa# 
Device ssh opened successfully.
SSH0: SSH client: IP = '212.100.225.42'  interface # = 2
SSH: host key initialised
SSH: license supports 3DES: 2
SSH: license supports DES: 2
SSH0: starting SSH control process
SSH0: Exchanging versions - SSH-2.0-Cisco-1.25
SSH0: send SSH message: outdata is NULL
server version string:SSH-2.0-Cisco-1.25SSH0: receive SSH message: 83 (83)
SSH0: client version is - SSH-2.0-OpenSSH_4.3
client version string:SSH-2.0-OpenSSH_4.3SSH0: begin server key generation
SSH0: complete server key generation, elapsed time = 1830 ms
SSH2 0: SSH2_MSG_KEXINIT sent
SSH2 0: SSH2_MSG_KEXINIT received
SSH2: kex: client->server aes128-cbc hmac-md5 none
SSH2: kex: server->client aes128-cbc hmac-md5 none
SSH2 0: expecting SSH2_MSG_KEXDH_INIT
SSH2 0: SSH2_MSG_KEXDH_INIT received
SSH2 0: signature length 143
SSH2: kex_derive_keys complete
SSH2 0: newkeys: mode 1
SSH2 0: SSH2_MSG_NEWKEYS sent
SSH2 0: waiting for SSH2_MSG_NEWKEYSSSH0: TCP read failed, error code = 0x86300003 "TCP connection closed"
SSH0: receive SSH message: [no message ID: variable *data is NULL]

SSH2 0: Unexpected mesg type receivedSSH0: Session disconnected by SSH server - error 0x00 "Internal error"

References
  1. http://www.myteneo.net/blog/-/blogs/accessing-cisco-asa-using-ssh/
  2. http://ciscotalk.wordpress.com/2011/08/31/enabling-ssh-on-a-cisco-asa/

Thursday, January 31, 2013

How to inspect HTTP headers for GET request in Chrome


When troubleshooting the HTTP protocol you often need to inspect and verify various HTTP headers on the requests being sent and the responses being received.


Problem

When navigating to the http://example.com/mysite.html URL in Chrome, how do you see all the headers?

Solution and demonstration

There are very few extensions that allow you to inspect the request headers; most of them give you access to the response data only. The Developer Tools is the only tool I found that shows both request and response headers in Chrome.

This is how you can start and test it: open the Developer Tools with Ctrl+Shift+I, switch to the Network tab, load the URL, and click on a request to see its request and response headers.
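
If you only need the raw headers outside the browser, curl shows the same information (a sketch reusing the example URL from above; '>' lines are request headers, '<' lines are response headers):

curl -v http://example.com/mysite.html -o /dev/null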

References
  1. https://developers.google.com/chrome-developer-tools/
  2. http://stackoverflow.com/questions/4423061/view-http-headers-in-google-chrome 
  3. http://blog.ashfame.com/2010/05/view-html-headers-firefox-google-chrome/

Tuesday, November 13, 2012

What happens to FTPS data channel if client closes control connection

There are a couple of extensions added to the standard FTP protocol to make it secure. This is important because in the default FTP configuration the control channel as well as the data channel use clear text to exchange commands and transmit data.

Problem

We assume that we were able to establish a successful FTPS-based session between a client and a server. The client has started a new data session to download a large file from the server, or is uploading a file using passive mode.

What happens to the file transfer if the control session is terminated by the client?

Troubleshooting

To verify the scenario we are going to set up a simple test, like in the Does IPv4 based FTPS server supports EPSV FTP protocol extension blog [1].

As the curl client by default does not close the control connection (which is correct behavior, and we will discuss it at the end of this blog), we are going to use an active method to close an established TCP session, described here: How to forcibly kill an established TCP connection in Linux [2].

Test #1: client downloads a large file

Client logs

Logs from when the control connection is closed and reset:

root@clinet:~# netstat -tulpan | grep curl
tcp        0      0 5.79.21.166:45707       5.79.17.48:8000         ESTABLISHED 5546/curl
tcp    64210      0 5.79.21.166:43796       5.79.17.48:8011         ESTABLISHED 5546/curl

root@clinet:~# ./killcx.pl 5.79.17.48:8011
killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/

[PARENT] checking connection with [5.79.17.48:8011]
[PARENT] found connection with [5.79.21.166:43796] (ESTABLISHED)
[PARENT] forking child
[CHILD]  interface not defined, will use [eth0]
[CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
[CHILD]  hooked ACK from [5.79.21.166:43796]
[CHILD]  found AckNum [1229126485] and SeqNum [3095306962]
[CHILD]  sending spoofed RST to [5.79.21.166:43796] with SeqNum [1229126485]
[CHILD]  sending RST to remote host as well with SeqNum [3095306962]
[CHILD]  all done, sending USR1 signal to parent [5781] and exiting
[PARENT] received child signal, checking results...
         => success : connection has been closed !

These are the client logs from the start of downloading until the control session is closed.

root@client:~# curl -v --limit-rate 10K -o file.txt -u rado:pass -k --ftp-ssl ftp://5.79.17.48:8000/c2900-universalk9-mz.SPA.152-1.T.bin
* About to connect() to 5.79.17.48 port 8000 (#0)
*   Trying 5.79.17.48...   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0connected
< 220-FileZilla Server version 0.9.41 beta
< 220-written by Tim Kosse (Tim.Kosse@gmx.de)
< 220 Please visit http://sourceforge.net/projects/filezilla/
> AUTH SSL
< 234 Using authentication type SSL
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER rado
< 331 Password required for rado
> PASS pass
< 230 Logged on
> PBSZ 0
< 200 PBSZ=0
> PROT P
< 200 Protection level set to P
> PWD
< 257 "/" is current directory.
* Entry path is '/'
> EPSV
* Connect data stream passively
< 229 Entering Extended Passive Mode (|||8011|)
*   Trying 5.79.17.48... connected
* Connecting to 5.79.17.48 (5.79.17.48) port 8011
> TYPE I
< 200 Type set to I
> SIZE c2900-universalk9-mz.SPA.152-1.T.bin
< 213 77200652
> RETR c2900-universalk9-mz.SPA.152-1.T.bin
< 150 Connection accepted
* Doing the SSL/TLS handshake on the data stream
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL re-using session ID
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
* Maxdownload = -1
* Getting file with size: 77200652
{ [data not shown]
  0 73.6M    0  616k    0     0  10095      0  2:07:27  0:01:02  2:06:25  9753* SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
  0 73.6M    0  620k    0     0  10160      0  2:06:38  0:01:02  2:05:36 11170
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]
curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

Server logs

As the file download starts, this is logged on the server.


After the client control connection is terminated, the server logs a '426 Connection closed; transfer aborted' message.


About 3-5 seconds later the connections clear from the server logs.


Test #2: client uploads a large file

Client logs

The client logs when the control channel is terminated:

root@client:~# netstat -tulpan | grep curl
tcp        0      0 5.79.21.166:43489       5.79.17.48:8016         ESTABLISHED 13177/curl
tcp        0      0 5.79.21.166:45717       5.79.17.48:8000         ESTABLISHED 13177/curl

root@client:~# ./killcx.pl  5.79.17.48:8016 
killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/

[PARENT] checking connection with [5.79.17.48:8016]
[PARENT] found connection with [5.79.21.166:43489] (ESTABLISHED)
[PARENT] forking child
[CHILD]  interface not defined, will use [eth0]
[CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
[PARENT] sending spoofed SYN to [5.79.21.166:43489] with bogus SeqNum
[CHILD]  hooked ACK from [5.79.21.166:43489]
[CHILD]  found AckNum [781536832] and SeqNum [2094006657]
[CHILD]  sending spoofed RST to [5.79.21.166:43489] with SeqNum [781536832]
[CHILD]  sending RST to remote host as well with SeqNum [2094006657]
[CHILD]  all done, sending USR1 signal to parent [13547] and exiting
[PARENT] received child signal, checking results...
         => success : connection has been closed !

Curl logs when the upload starts and the control channel is terminated.

root@client:~# curl -v --limit-rate 10K -T file.txt -u rado:pass -k --ftp-ssl ftp://5.79.17.48:8000/
* About to connect() to 5.79.17.48 port 8000 (#0)
*   Trying 5.79.17.48...   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0connected
< 220-FileZilla Server version 0.9.41 beta
< 220-written by Tim Kosse (Tim.Kosse@gmx.de)
< 220 Please visit http://sourceforge.net/projects/filezilla/
> AUTH SSL
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0< 234 Using authentication type SSL
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER rado
< 331 Password required for rado
> PASS pass
< 230 Logged on
> PBSZ 0
< 200 PBSZ=0
> PROT P
< 200 Protection level set to P
> PWD
< 257 "/" is current directory.
* Entry path is '/'
> EPSV
* Connect data stream passively
< 229 Entering Extended Passive Mode (|||8016|)
*   Trying 5.79.17.48... connected
* Connecting to 5.79.17.48 (5.79.17.48) port 8016
> TYPE I
< 200 Type set to I
> STOR file.txt
< 150 Connection accepted
* Doing the SSL/TLS handshake on the data stream
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL re-using session ID
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
} [data not shown]
  0 73.6M    0     0    0  688k      0  10122  2:07:07  0:01:09  2:05:58  9814* SSL_write() returned SYSCALL, errno = 10422:51:35
  0 73.6M    0     0    0  688k      0  10122  2:07:07  0:01:09  2:05:58  8177
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]
curl: (55) SSL_write() returned SYSCALL, errno = 104

Server logs

Server log entries when the upload starts, and again 1-3 seconds after the control channel is closed.




Results discussion

We can see that every time the client closes the TCP session hosting the control channel, bad things happen to the upload or download in progress.

This is expected behavior and is documented in the relevant RFC documents:


http://tools.ietf.org/html/rfc4217
7. Data Connection Behaviour

http://tools.ietf.org/html/rfc959
3.2.  ESTABLISHING DATA CONNECTIONS

The server MUST close the data connection under the following conditions:

         1. The server has completed sending data in a transfer mode
            that requires a close to indicate EOF.

         2. The server receives an ABORT command from the user.

         3. The port specification is changed by a command from the
            user.

         4. The control connection is closed legally or otherwise.

         5. An irrecoverable error condition occurs.


References
  1. http://rtomaszewski.blogspot.co.uk/2012/11/does-ipv4-based-ftps-server-supports.html
  2. http://rtomaszewski.blogspot.co.uk/2012/11/how-to-forcibly-kill-established-tcp.html

Thursday, October 25, 2012

How to extract the duration of a TCP session from a tcpdump file


I took a tcpdump to capture all my application's connections to the database server. I can filter the tcpdump data and extract the relevant sessions using standard tcpdump filters.

Problem

How to find the duration of a TCP session without manually checking packets and calculating the elapsed time?

Solution

There are many tools that can read and understand a tcpdump file. One of them is tcptrace. An example of how to use it to find the time is demonstrated below.

root@db1:~# tcptrace -n -l -o1 google.pcap
1 arg remaining, starting with 'google.pcap'
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov  4, 2004

12 packets seen, 12 TCP packets traced
elapsed wallclock time: 0:00:00.001738, 6904 pkts/sec analyzed
trace file elapsed time: 0:00:07.092266
TCP connection info:
1 TCP connection traced:
TCP connection 1:
        host a:        2a00:1a48:7805:0111:8cfc:cf10:ff08:0a2f:55939
        host b:        2a00:1450:400c:0c05::0063:80
        complete conn: yes
        first packet:  Wed Oct 24 22:49:59.166611 2012
        last packet:   Wed Oct 24 22:50:06.258878 2012
        elapsed time:  0:00:07.092266
        total packets: 12
        filename:      google.pcap
   a->b:                              b->a:
     total packets:             6           total packets:             6
     ack pkts sent:             5           ack pkts sent:             6
     ...
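
An alternative sketch uses tshark (part of Wireshark): its TCP conversation statistics print a duration column for every session in the file.

tshark -r google.pcap -q -z conv,tcp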

References
  1. http://www.tcptrace.org/manual/node11_tf.html
  2. http://docstore.mik.ua/orelly/networking_2ndEd/tshoot/ch05_05.htm
  3. http://www.noah.org/wiki/Packet_sniffing
  4. http://www.darknet.org.uk/2007/11/tcpflow-tcp-flow-recorder-for-protocol-analysis-and-debugging/
  5. http://danielmiessler.com/study/tcpdump/

Thursday, August 16, 2012

How to calculate a number of new SSL/TCP connections per every 10ms

Hardware load balancers like F5 are a great product offering a lot of features combined with a simple and intuitive management GUI. The only problem is the price you have to pay to buy one, and then the support and license fees on top.

When working with F5 I once ran into an interesting SSL/TLS problem. It is documented and described in SOL6475: Overview of SSL TPS licensing limits.

The most important part of the solution is:

The BIG-IP system measures SSL TPS based on client-side connection attempts to any
virtual server configured with a Client SSL profile. SSL TPS is enforced across a
sliding time window. The BIG-IP system utilizes a 10ms window (1/100 of a second)
to calculate the current TPS. If the number of TPS requests within any 10ms window
exceeds 1/100 of the licensed TPS, an error message regarding the TPS limit being
reached is sent to the /var/log/ltm file.

Problem

How do you know which client IPs cause the error to be logged? How do you measure and calculate the number of SSL connections per second, or even per 10ms?

Solution

As there are no tools on the F5 that help you find this out, I thought a simple way to get some visibility would be to capture all TCP SYN packets hitting the LB and then do some analysis on them later. An implementation of this idea, in the form of a Python script, can be found here [1].

Demonstration

To test the sslAnalyze.py script we first need a tcpdump file. For this purpose we can use the nmap command and run a SYN flood. For a description of the nmap options you can take a look here [2].

$ nmap -P0 -TNormal -D 1.2.3.4,1.2.3.5,1.2.3.6,1.2.3.7,1.2.3.8,1.2.3.9,1.2.3.10 -iR 10

All we have to do now is run tcpdump in one session and the nmap command in the other. As we are only interested in the TCP SYN packets, we should tailor the tcpdump filter syntax accordingly. A tcpdump that will capture only the SYN packets:

$ tcpdump -vvv -nn -i eth0 -w /var/tmp/syn-flood-example.pcap 'tcp[13]&2!=0 and tcp[13]&16==0' 
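
The byte-offset filter above matches packets that have the SYN bit set and the ACK bit clear (byte 13 of the TCP header holds the flags). The same condition can be written symbolically, which may be easier to read:

$ tcpdump -vvv -nn -i eth0 -w /var/tmp/syn-flood-example.pcap 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'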

All we have to do now is run our script to see the statistics.

Let me quickly explain the script itself. Once run, it prints a listing of the found connections on stdout and additionally creates a log file named sslConnHigh.txt with only those connections that are over the threshold.

The parameters that you have to specify are:
  • param1 - tcpdump file (it has to have only SYN packets) 
  • param2 - time fractions in microseconds ( 1000000 microseconds -> 1 second ) 
  • param3 - connection threshold per time to log this result to a sslConnHigh.txt file

Examples

# Example 1: to see the number of connections per 1 second 

$ python sslAnalyze.py  syn-flood-example.pcap 1000000 1

# Example 2: to see the number of connections per every 500ms (half a second)

$ python sslAnalyze.py  syn-flood-example.pcap 500000 1

# Example 3: to see the number of connections per every 500ms (half a second) and log only
# those timestamps that have more than 100 connections in a single half second
# (some example output is attached below)

$ python sslAnalyze.py  syn-flood-example.pcap 500000 100

keeping the line: reading from file syn-flood-example.pcap, link-type EN10MB (Ethernet)
                     date     timestamp     sumOfConn [... 500000 microsecond periods ... ]
 Tue Aug 14 23:33:30 2012    1344983610       sum:183     0  183 
 Tue Aug 14 23:33:31 2012    1344983611        sum:95     6   89 
 Tue Aug 14 23:33:32 2012    1344983612       sum:614   430  184 
 Tue Aug 14 23:33:33 2012    1344983613       sum:520   216  304 

To better understand why the F5 logs the error message and what triggers the TPS error messages, we have to run this command:

# 10 milliseconds = 10000 microseconds
$ python sslAnalyze.py  syn-flood-example.pcap 10000 [F5_SSL_total_TPS]
$ cat sslConnHigh.txt

In the output you are going to see the timestamps (rounded to 1 second) where the number of connections in a single 10ms window is above the licensing limit your device has. For further analysis you can extract this data from the tcpdump with the help of the tcpslice tool.


# 1268649656 is an example timestamp from above
$ tcpslice 1268649656  +1 syn-flood-example.pcap -w 1268649656.pcap

$ tcpdump -tt -nr 1268649656.pcap

reading from file 1268649656.pcap, link-type EN10MB (Ethernet)
1268649656.042723 vlan 4093, p 0, IP 19.26.168.192.4598 > 19.26.225.215.443: S 2973530156:2973530156(0) win 64512 <mss 1460,nop,nop,sackOK>
1268649656.056163 vlan 4093, p 0, IP 19.89.139.199.1622 > 19.26.225.23.443: S 1522394445:1522394445(0) win 64512 <mss 1460,nop,wscale 0,nop,nop,sackOK>

References
  1. https://github.com/rtomaszewski/experiments/blob/master/sslAnalyze.py
  2. http://www.hcsw.org/reading/nmapguide.txt
  3. http://danielmiessler.com/study/tcpdump/

Wednesday, March 28, 2012

Network data analysis for a scenario with two Cloud Servers behind a Rackspace Cloud Load Balancer

In this post we are going to take a look at the Rackspace Cloud Load Balancer (CLB) again, but this time we aim to analyze a scenario with persistence issues and 2 pool members. In particular, we want to find out and understand how session persistence works on the CLB. The simple diagram below shows our small test cloud topology. We have 3 Cloud Servers (CS) (1 client + 2 pool members) and one load balancer (LB). Our LB is using the Round Robin algorithm to distribute the load among the pool members. The pool members are defined using the internal IP addresses 10.177.132.15 and 10.177.133.12.



Test scenario #1 without session persistence
  1. The client sends the 1st HTTP request to LB1.
  2. LB1 sends the request to the first pool member according to the round robin state.
  3. The pool member replies with HTTP 200.
  4. LB1 sends the reply back to the client.
  5. The client sends the 2nd HTTP request to LB1.
  6. LB1 sends the request to the other pool member.
  7. The pool member replies with HTTP 200.
  8. LB1 sends the reply back to the client.
Because we don't have access to LB1, we are going to collect concurrent tcpdumps from all the CSs. We use curl to simulate HTTP requests on the client.

# run on client
tcpdump -nn -s0 -i any -w /var/tmp/client.pcap  port 80

# run on server1
tcpdump -nn -s0 -i any -w /var/tmp/server-urado1.pcap  port 80

# run on server2
tcpdump -nn -s0 -i any -w /var/tmp/server-urado2.pcap  port 80

# run on client to simulate HTTP requests
curl -v http://31.222.175.142

The first dumps and data from the test can be found below [1].

[root@crado1 tmp]# tshark  -n -r client.pcap
  1   0.000000 31.222.191.246 -> 31.222.175.142 TCP 60925 > 80 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=25369178 TSER=0 WS=4
  2   0.000513 31.222.175.142 -> 31.222.191.246 TCP 80 > 60925 [SYN, ACK] Seq=0 Ack=1 Win=17896 Len=0 MSS=8960 TSV=1091232983 TSER=25369178 WS=9
  3   0.000541 31.222.191.246 -> 31.222.175.142 TCP 60925 > 80 [ACK] Seq=1 Ack=1 Win=5840 Len=0 TSV=25369178 TSER=1091232983
  4   0.000609 31.222.191.246 -> 31.222.175.142 HTTP GET / HTTP/1.1
  5   0.000841 31.222.175.142 -> 31.222.191.246 TCP 80 > 60925 [ACK] Seq=1 Ack=159 Win=19456 Len=0 TSV=1091232983 TSER=25369178
  6   0.008348 31.222.175.142 -> 31.222.191.246 HTTP HTTP/1.1 200 OK  (text/html)
  7   0.008369 31.222.191.246 -> 31.222.175.142 TCP 60925 > 80 [ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=25369180 TSER=1091232983
  8   0.008651 31.222.191.246 -> 31.222.175.142 TCP 60925 > 80 [FIN, ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=25369180 TSER=1091232983
  9   0.008841 31.222.175.142 -> 31.222.191.246 TCP 80 > 60925 [FIN, ACK] Seq=301 Ack=160 Win=19456 Len=0 TSV=1091232983 TSER=25369180
 10   0.008855 31.222.191.246 -> 31.222.175.142 TCP 60925 > 80 [ACK] Seq=160 Ack=302 Win=6912 Len=0 TSV=25369180 TSER=1091232983
 11   2.182732 31.222.191.246 -> 31.222.175.142 TCP 60926 > 80 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=25369723 TSER=0 WS=4
 12   2.183220 31.222.175.142 -> 31.222.191.246 TCP 80 > 60926 [SYN, ACK] Seq=0 Ack=1 Win=17896 Len=0 MSS=8960 TSV=1091233201 TSER=25369723 WS=9
 13   2.183240 31.222.191.246 -> 31.222.175.142 TCP 60926 > 80 [ACK] Seq=1 Ack=1 Win=5840 Len=0 TSV=25369723 TSER=1091233201
 14   2.183496 31.222.191.246 -> 31.222.175.142 HTTP GET / HTTP/1.1
 15   2.183658 31.222.175.142 -> 31.222.191.246 TCP 80 > 60926 [ACK] Seq=1 Ack=159 Win=19456 Len=0 TSV=1091233201 TSER=25369724
 16   2.186699 31.222.175.142 -> 31.222.191.246 HTTP HTTP/1.1 200 OK  (text/html)
 17   2.186717 31.222.191.246 -> 31.222.175.142 TCP 60926 > 80 [ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=25369724 TSER=1091233201
 18   2.188335 31.222.191.246 -> 31.222.175.142 TCP 60926 > 80 [FIN, ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=25369725 TSER=1091233201
 19   2.188756 31.222.175.142 -> 31.222.191.246 TCP 80 > 60926 [FIN, ACK] Seq=301 Ack=160 Win=19456 Len=0 TSV=1091233201 TSER=25369725
 20   2.188771 31.222.191.246 -> 31.222.175.142 TCP 60926 > 80 [ACK] Seq=160 Ack=302 Win=6912 Len=0 TSV=25369725 TSER=1091233201

We can see that from the client's point of view the data always comes from the same IP address. The 2 pool members that we have are not visible.

The dumps below also show that the responses came from different servers. We can also confirm that neither the request nor the reply carried a cookie header passed to the client.

root@urado1:/var/tmp# tshark -n -r server-urado1.pcap -V http
    GET / HTTP/1.1\r\n
        [Expert Info (Chat/Sequence): GET / HTTP/1.1\r\n]
            [Message: GET / HTTP/1.1\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Method: GET
        Request URI: /
        Request Version: HTTP/1.1
    User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5\r\n
    X-Forwarded-For: 31.222.191.246\r\n
    Accept: */*\r\n
    X-Forwarded-Proto: http\r\n
    Host: 31.222.175.142\r\n
    X-Cluster-Client-Ip: 31.222.191.246\r\n
    \r\n

    HTTP/1.1 200 OK\r\n
        [Expert Info (Chat/Sequence): HTTP/1.1 200 OK\r\n]
            [Message: HTTP/1.1 200 OK\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Version: HTTP/1.1
        Status Code: 200
        Response Phrase: OK
    Date: Tue, 27 Mar 2012 22:37:17 GMT\r\n
    Server: Apache/2.2.20 (Ubuntu)\r\n
    Last-Modified: Mon, 26 Mar 2012 21:29:38 GMT\r\n
    ETag: "7c030-2c-4bc2c1245f480"\r\n
    Accept-Ranges: bytes\r\n
    Content-Length: 44\r\n
        [Content length: 44]
    Vary: Accept-Encoding\r\n
    Content-Type: text/html\r\n
    \r\n
Line-based text data: text/html
    It works! urado1\n
    \n            


root@urado2:/var/tmp# tshark -n -r server-urado2.pcap -V http
    GET / HTTP/1.1\r\n
        [Expert Info (Chat/Sequence): GET / HTTP/1.1\r\n]
            [Message: GET / HTTP/1.1\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Method: GET
        Request URI: /
        Request Version: HTTP/1.1
    User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5\r\n
    X-Forwarded-For: 31.222.191.246\r\n
    Accept: */*\r\n
    X-Forwarded-Proto: http\r\n
    Host: 31.222.175.142\r\n
    X-Cluster-Client-Ip: 31.222.191.246\r\n
    \r\n

    HTTP/1.1 200 OK\r\n
        [Expert Info (Chat/Sequence): HTTP/1.1 200 OK\r\n]
            [Message: HTTP/1.1 200 OK\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Version: HTTP/1.1
        Status Code: 200
        Response Phrase: OK
    Date: Tue, 27 Mar 2012 22:37:21 GMT\r\n
    Server: Apache/2.2.20 (Ubuntu)\r\n
    Last-Modified: Mon, 26 Mar 2012 21:30:02 GMT\r\n
    ETag: "5c033-2c-4bc2c13b42a80"\r\n
    Accept-Ranges: bytes\r\n
    Content-Length: 44\r\n
        [Content length: 44]
    Vary: Accept-Encoding\r\n
    Content-Type: text/html\r\n
    \r\n
Line-based text data: text/html
    It works! urado2\n
    \n

Test scenario #2 with session persistence enabled on LB1

The test steps are the same as before. The only difference is a configuration change on LB1, where we enable the persistence feature. The new tcpdumps were taken as before and can be downloaded from [2]. The easiest way to see the differences without repeating ourselves is to look at the curl output, where we see directly all our requests and replies together with the HTML payload from the servers.

First, a GET request without a Cookie header. This time we see in the response that the LB is sending a cookie we didn't see before.

[root@crado1 ~]# curl -v http://31.222.175.142
* About to connect() to 31.222.175.142 port 80
*   Trying 31.222.175.142... connected
* Connected to 31.222.175.142 (31.222.175.142) port 80
> GET / HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: 31.222.175.142
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache/2.2.20 (Ubuntu)
< Vary: Accept-Encoding
< Content-Type: text/html
< Date: Tue, 27 Mar 2012 22:57:56 GMT
< Accept-Ranges: bytes
< ETag: "7c030-2c-4bc2c1245f480"
< Set-Cookie: X-Mapping-fjhppofk=AEC8609A6667F8E6AC1B323F80BDF8C9; path=/
< Last-Modified: Mon, 26 Mar 2012 21:29:38 GMT
< Content-Length: 44
It works! urado1

* Connection #0 to host 31.222.175.142 left intact
* Closing connection #0

Another similar request. The response again contains a Set-Cookie header, but with a different value. The HTML payloads from the two requests show that the data came from CS1 and CS2.

[root@crado1 ~]# curl -v http://31.222.175.142
* About to connect() to 31.222.175.142 port 80
*   Trying 31.222.175.142... connected
* Connected to 31.222.175.142 (31.222.175.142) port 80
> GET / HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: 31.222.175.142
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache/2.2.20 (Ubuntu)
< Vary: Accept-Encoding
< Content-Type: text/html
< Date: Tue, 27 Mar 2012 22:58:04 GMT
< Accept-Ranges: bytes
< ETag: "5c033-2c-4bc2c13b42a80"
< Set-Cookie: X-Mapping-fjhppofk=0459144121542E9668F8271676341C7B; path=/
< Last-Modified: Mon, 26 Mar 2012 21:30:02 GMT
< Content-Length: 44
It works! urado2

* Connection #0 to host 31.222.175.142 left intact
* Closing connection #0

We sent another request, but this time the client supplies the Cookie header provided before. We use the value returned for Server2. Based on the HTML payload we see that the response comes from the previously selected pool member. The LB doesn't provide another cookie as it did before.

[root@crado1 ~]# curl -v -H 'Cookie: X-Mapping-fjhppofk=0459144121542E9668F8271676341C7B' 31.222.175.142 
* About to connect() to 31.222.175.142 port 80
*   Trying 31.222.175.142... connected
* Connected to 31.222.175.142 (31.222.175.142) port 80
> GET / HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: 31.222.175.142
> Accept: */*
> Cookie: X-Mapping-fjhppofk=0459144121542E9668F8271676341C7B
>
< HTTP/1.1 200 OK
< Date: Tue, 27 Mar 2012 22:58:35 GMT
< Server: Apache/2.2.20 (Ubuntu)
< Last-Modified: Mon, 26 Mar 2012 21:30:02 GMT
< ETag: "5c033-2c-4bc2c13b42a80"
< Accept-Ranges: bytes
< Content-Length: 44
< Vary: Accept-Encoding
< Content-Type: text/html
It works! urado2

* Connection #0 to host 31.222.175.142 left intact
* Closing connection #0

The same request sent again returns the same result, as expected.

[root@crado1 ~]# curl -v -H 'Cookie: X-Mapping-fjhppofk=0459144121542E9668F8271676341C7B' 31.222.175.142                 
* About to connect() to 31.222.175.142 port 80
*   Trying 31.222.175.142... connected
* Connected to 31.222.175.142 (31.222.175.142) port 80
> GET / HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: 31.222.175.142
> Accept: */*
> Cookie: X-Mapping-fjhppofk=0459144121542E9668F8271676341C7B
>
< HTTP/1.1 200 OK
< Date: Tue, 27 Mar 2012 22:58:52 GMT
< Server: Apache/2.2.20 (Ubuntu)
< Last-Modified: Mon, 26 Mar 2012 21:30:02 GMT
< ETag: "5c033-2c-4bc2c13b42a80"
< Accept-Ranges: bytes
< Content-Length: 44
< Vary: Accept-Encoding
< Content-Type: text/html
It works! urado2

* Connection #0 to host 31.222.175.142 left intact
* Closing connection #0

Summary
Session persistence on the Rackspace Cloud Load Balancer is based on an HTTP cookie. When the load balancer returns the response to the client, it rewrites the IP addresses and inserts a new Set-Cookie header. As long as the client presents that cookie value in its subsequent requests, it will be load balanced to the same pool member.
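
To emulate a well-behaved client without pasting cookie values by hand, curl can store and replay the persistence cookie automatically via its cookie-jar options. A minimal sketch against the VIP from the tests above (the jar file path is arbitrary):

# first request: any Set-Cookie value is stored in the jar file
curl -s -c /var/tmp/lb.jar http://31.222.175.142

# subsequent requests: send the stored cookie back (-b) and keep the jar updated (-c)
curl -s -b /var/tmp/lb.jar -c /var/tmp/lb.jar http://31.222.175.142
curl -s -b /var/tmp/lb.jar -c /var/tmp/lb.jar http://31.222.175.142

Every request after the first one should return the HTML payload of the same pool member.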

References
[1] persistence disabled
[2] persistence enabled

Tuesday, March 27, 2012

Network data analysis for a simple scenario with the Rackspace Cloud Load Balancer (CLB)

A load balancer has become a de facto standard in many distributed deployments. You are going to need one even if you don't know it yet.

Let's take a look at the Rackspace Cloud Load Balancer (CLB) feature [1] and dig a little bit deeper into how it actually works. We are going to base our analysis on the diagram below. As the scenario is very simple to set up and configure, I'm leaving that part to the reader.




Test scenario
  1. The client sends an HTTP request to LB1.
  2. LB1 load balances it to its only pool member.
  3. The pool member replies with HTTP 200.
  4. LB1 sends the reply back to the client.

As we don't have access to LB1 itself, it is hard to say what is really happening there. The only way at the moment is to take concurrent tcpdumps on the client and the server. To simulate the client requests we are going to use curl. The commands look like this:

# run on client
tcpdump -nn -s0 -i any -w /var/tmp/client.pcap  port 80

# run on server
tcpdump -nn -s0 -i any -w /var/tmp/server-urado1.pcap  port 80

# run on the client, in a separate session for example
curl -v http://31.222.175.142

The data from the captured tcpdumps can be seen below. The original files can be found here [2].

[root@crado1 tmp]# tshark  -n -r client.pcap
  1   0.000000 31.222.191.246 -> 31.222.175.142 TCP 47267 > 80 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=3064226 TSER=0 WS=4
  2   0.000324 31.222.175.142 -> 31.222.191.246 TCP 80 > 47267 [SYN, ACK] Seq=0 Ack=1 Win=17896 Len=0 MSS=8960 TSV=1082311203 TSER=3064226 WS=9
  3   0.000344 31.222.191.246 -> 31.222.175.142 TCP 47267 > 80 [ACK] Seq=1 Ack=1 Win=5840 Len=0 TSV=3064226 TSER=1082311203
  4   0.000441 31.222.191.246 -> 31.222.175.142 HTTP GET / HTTP/1.1
  5   0.000565 31.222.175.142 -> 31.222.191.246 TCP 80 > 47267 [ACK] Seq=1 Ack=159 Win=19456 Len=0 TSV=1082311203 TSER=3064226
  6   0.018854 31.222.175.142 -> 31.222.191.246 HTTP HTTP/1.1 200 OK  (text/html)
  7   0.018873 31.222.191.246 -> 31.222.175.142 TCP 47267 > 80 [ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=3064231 TSER=1082311205
  8   0.019107 31.222.191.246 -> 31.222.175.142 TCP 47267 > 80 [FIN, ACK] Seq=159 Ack=301 Win=6912 Len=0 TSV=3064231 TSER=1082311205
  9   0.019323 31.222.175.142 -> 31.222.191.246 TCP 80 > 47267 [FIN, ACK] Seq=301 Ack=160 Win=19456 Len=0 TSV=1082311205 TSER=3064231
 10   0.019337 31.222.191.246 -> 31.222.175.142 TCP 47267 > 80 [ACK] Seq=160 Ack=302 Win=6912 Len=0 TSV=3064231 TSER=1082311205

root@urado1:~/tmp# tshark -n -r server-urado1.pcap
  1   0.000000 10.190.254.7 -> 10.177.132.15 TCP 76 40293 > 80 [SYN] Seq=0 Win=17920 Len=0 MSS=8960 SACK_PERM=1 TSval=1082311203 TSecr=0 WS=512
  2   0.000061 10.177.132.15 -> 10.190.254.7 TCP 76 80 > 40293 [SYN, ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=117399288 TSecr=1082311203 WS=4
  3   0.000382 10.190.254.7 -> 10.177.132.15 TCP 68 40293 > 80 [ACK] Seq=1 Ack=1 Win=17920 Len=0 TSval=1082311203 TSecr=117399288
  4   0.000390 10.190.254.7 -> 10.177.132.15 HTTP 321 GET / HTTP/1.1
  5   0.000422 10.177.132.15 -> 10.190.254.7 TCP 68 80 > 40293 [ACK] Seq=1 Ack=254 Win=15552 Len=0 TSval=117399288 TSecr=1082311203
  6   0.017298 10.177.132.15 -> 10.190.254.7 HTTP 368 HTTP/1.1 200 OK  (text/html)
  7   0.017790 10.190.254.7 -> 10.177.132.15 TCP 68 40293 > 80 [ACK] Seq=254 Ack=301 Win=19456 Len=0 TSval=1082311205 TSecr=117399293
  8   5.032706 10.177.132.15 -> 10.190.254.7 TCP 68 80 > 40293 [FIN, ACK] Seq=301 Ack=254 Win=15552 Len=0 TSval=117400546 TSecr=1082311205
  9   5.063446 10.190.254.7 -> 10.177.132.15 TCP 68 40293 > 80 [ACK] Seq=254 Ack=302 Win=19456 Len=0 TSval=1082311710 TSecr=117400546

Analysis

Based on the tcpdumps from the server we can clearly see that LB1 is changing the original source IP address. This means that our server can't directly rely on the original IP of the client.

Looking further at the payload, we can see that as the traffic is sent from the LB to the pool member, the load balancer inserts additional headers into the original GET request.

[root@crado1 tmp]# tshark -n -r client.pcap -V http
    GET / HTTP/1.1\r\n
        Request Method: GET
        Request URI: /
        Request Version: HTTP/1.1
    User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5\r\n
    Host: 31.222.175.142\r\n
    Accept: */*\r\n
    \r\n

    HTTP/1.1 200 OK\r\n
        Request Version: HTTP/1.1
        Response Code: 200
    Date: Mon, 26 Mar 2012 21:50:16 GMT\r\n
    Server: Apache/2.2.20 (Ubuntu)\r\n
    Last-Modified: Mon, 26 Mar 2012 21:29:38 GMT\r\n
    ETag: "7c030-2c-4bc2c1245f480"\r\n
    Accept-Ranges: bytes\r\n
    Content-Length: 44\r\n
        [Content length: 44]
    Vary: Accept-Encoding\r\n
    Content-Type: text/html\r\n
    \r\n
Line-based text data: text/html
    <html><body>It works! urado1\n
    </body></html>\n

root@urado1:~/tmp# tshark -n -r server-urado1.pcap -V http
    GET / HTTP/1.1\r\n
        [Expert Info (Chat/Sequence): GET / HTTP/1.1\r\n]
            [Message: GET / HTTP/1.1\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Method: GET
        Request URI: /
        Request Version: HTTP/1.1
    User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5\r\n
    X-Forwarded-For: 31.222.191.246\r\n
    Accept: */*\r\n
    X-Forwarded-Proto: http\r\n
    Host: 31.222.175.142\r\n
    X-Cluster-Client-Ip: 31.222.191.246\r\n
    \r\n
    [Full request URI: http://31.222.175.142/]

    HTTP/1.1 200 OK\r\n
        [Expert Info (Chat/Sequence): HTTP/1.1 200 OK\r\n]
            [Message: HTTP/1.1 200 OK\r\n]
            [Severity level: Chat]
            [Group: Sequence]
        Request Version: HTTP/1.1
        Status Code: 200
        Response Phrase: OK
    Date: Mon, 26 Mar 2012 21:50:16 GMT\r\n
    Server: Apache/2.2.20 (Ubuntu)\r\n
    Last-Modified: Mon, 26 Mar 2012 21:29:38 GMT\r\n
    ETag: "7c030-2c-4bc2c1245f480"\r\n
    Accept-Ranges: bytes\r\n
    Content-Length: 44\r\n
        [Content length: 44]
    Vary: Accept-Encoding\r\n
    Content-Type: text/html\r\n
    \r\n
Line-based text data: text/html
    It works! urado1\n
    \n

Summary

Although the load balancer manipulates the IP addresses on both the client and the server side, it still provides a method to identify the original source IP address of the client: it inserts an X-Forwarded-For HTTP header carrying the original client IP address.
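
For the backend this usually means adjusting the logging, so the real client IP ends up in the access log instead of the load balancer's address. A minimal sketch, assuming the Apache servers from the tests above (the config path and log file name are assumptions):

# append a log format that records X-Forwarded-For instead of the peer IP
cat >> /etc/apache2/conf.d/proxied-logging.conf <<'EOF'
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog /var/log/apache2/access-proxied.log proxied
EOF

Keep in mind that X-Forwarded-For is just an HTTP header and can be spoofed by clients that reach the backend directly, so it should only be trusted when the traffic really comes through the load balancer.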


Sunday, January 1, 2012

One line Bash script debugging

Working on the Bash shell can be very effective. You can combine various command line programs and chain (pipe) them together to accomplish a bigger task. Sometimes, though, you have to debug your one-line scripts.

While working on the CLI, I wrote in a hurry a small command to find and check the value of the sched_autogroup_enabled Linux kernel variable [1] under the proc file system.

To my surprise it didn't work at all.

root@udesktop:/proc# find . -name \*sched\* 2>/dev/null  | grep -v [0-9]
root@udesktop:/proc# 

It is easy to find this file manually, and I did. Below is the proof that the file I was looking for exists.

root@udesktop:/proc# ls -la ./sys/kernel/sched_autogroup_enabled
-rw-r--r-- 1 root root 0 2012-01-01 21:38 ./sys/kernel/sched_autogroup_enabled

Problem
How to debug one-line Bash scripts or, more generally, any Bash script.

Solution
The problem is easy to see once we enable more verbose debug output from Bash:

root@udesktop:/proc# set -v -x
root@udesktop:/proc# find . -name \*sched\* 2>/dev/null  | grep -v [0-9]
find . -name \*sched\* 2>/dev/null  | grep -v [0-9]
+ find . -name '*sched*'
+ grep --color=auto -v 1 2 3 5 6 7 8 9

We can see that the string '[0-9]' is expanded by Bash before the command chain is actually executed: the unquoted pattern is treated as a filename glob and, in /proc, it matches the numeric PID directories (1, 2, 3, ...), so grep receives those names as arguments instead of the pattern.

Once we know that the problem is how our pattern [2] is evaluated, the fix is simple: quote it so the shell passes it to grep untouched.

root@udesktop:/proc# find . -name \*sched\* 2>/dev/null  | grep -v '[0-9]'
find . -name \*sched\* 2>/dev/null  | grep -v '[0-9]'
+ find . -name '*sched*'
+ grep --color=auto -v '[0-9]'
./schedstat
./sched_debug
./sys/kernel/sched_child_runs_first
./sys/kernel/sched_min_granularity_ns
./sys/kernel/sched_latency_ns
./sys/kernel/sched_wakeup_granularity_ns
./sys/kernel/sched_tunable_scaling
./sys/kernel/sched_migration_cost
./sys/kernel/sched_nr_migrate
./sys/kernel/sched_time_avg
./sys/kernel/sched_shares_window
./sys/kernel/sched_rt_period_us
./sys/kernel/sched_rt_runtime_us
./sys/kernel/sched_compat_yield
./sys/kernel/sched_autogroup_enabled
./sys/kernel/sched_domain
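
To reproduce the same pitfall in isolation, here is a minimal sketch; the temporary directory and file names are made up for the demonstration, and the first line switches the debug output back off:

$ set +v +x          # switch the verbose/trace output off again
$ cd $(mktemp -d)
$ touch 1 2 3        # files named like the PID directories in /proc
$ echo [0-9]         # unquoted: the shell expands the glob first
1 2 3
$ echo '[0-9]'       # quoted: the literal pattern reaches the command
[0-9]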

References
[1]
Benefiting of sched_autogroup_enabled on the desktop
http://unix.stackexchange.com/questions/9069/benefiting-of-sched-autogroup-enabled-on-the-desktop

The ~200 Line Linux Kernel Patch That Does Wonders
http://www.phoronix.com/scan.php?page=article&item=linux_2637_video&num=1

[2]
Bash Reference Manual
http://www.gnu.org/software/bash/manual/bashref.html#Filename-Expansion

Debugging Bash scripts
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html

Sunday, May 1, 2011

Gdisk error message: Caution: invalid main GPT header, but valid backup; regenerating main header from backup!

At some point when experimenting with partitions on your disk you may get the following error message.

% gdisk /dev/sda
GPT fdisk (gdisk) version 0.7.1

Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: damaged

Found valid MBR and corrupt GPT. Which do you want to use? (Using the
GPT MAY permit recovery of GPT data.)
 1 - MBR
 2 - GPT
 3 - Create blank GPT

As always, you have to be very careful, depending on what you did and what you want to do next.

The three options gdisk offers give you the choice of which partition table to examine. Fortunately for us, gdisk is not going to write anything to the disk until we say so by executing the w (write) command (example below). The three options only influence how the data on the disk is interpreted and presented for our review.

# gdisk /dev/sda
Command (? for help): h
b    back up GPT data to a file
c    change a partition's name
d    delete a partition
i    show detailed information on a partition
l    list known partition types
n    add a new partition
o    create a new empty GUID partition table (GPT)
p    print the partition table
q    quit without saving changes
r    recovery and transformation options (experts only)
s    sort partitions
t    change a partition's type code
v    verify disk
w    write table to disk and exit
x    extra functionality (experts only)
?    print this menu

At this stage it is safe to experiment and have a look at what the partition tables look like. Depending on your choice, different partitions may be printed ('p').

REMEMBER to always quit the session with 'q' (quit) and never with 'w' (write); otherwise your experiments will be permanently saved to the disk.
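
Before experimenting at all, it is prudent to take a backup first. A minimal sketch using dd (gdisk itself also offers the 'b' menu command shown above to back up its GPT data to a file):

# LBA 0 holds the (protective) MBR, LBA 1 the main GPT header and LBA 2-33 the
# partition entries; the backup GPT at the end of the disk is not covered here
dd if=/dev/sda of=/var/tmp/sda-first-34-sectors.bin bs=512 count=34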

In my case this error was merely misleading and confusing. I could verify that my current partition scheme was not GPT at all but the good old MSDOS one [1]. At that stage I knew that 'gdisk' was not the tool I wanted to use, and I finished creating the new partitions with 'parted'. I didn't have to use gdisk at all.

Why there were remnants of GPT data on my disk is unknown. I can only suspect that they were created when I played with the Windows tool 'EasyBCD'.

I could have tried to delete the GPT data but, based on [2], I never had to do it.

References
[1] How to find what type of disk partition schema do I use (msdos, gpt)

[2] Wiping Out Old GPT Data

How to find what type of disk partition schema do I use (msdos, gpt)

There are many disk partition schemes that can be used. On most x86-based computers, though, the choice is often limited by the operating system we use.

The list of labels (partition schemes) supported by the GNU parted tool can be found here [1]:


label-type must be one of these supported disk labels:

* bsd
* loop (raw disk access)
* gpt
* mac
* msdos
* pc98
* sun


To find out what type of partition scheme you currently have on your system, run the command below and check the value of 'Partition Table'.

% parted /dev/sda unit mb print
Model: ATA ST2000DL003-9VT1 (scsi)
Disk /dev/sda: 2000399MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start     End       Size      Type     File system  Flags
 1      0.03MB    104856MB  104856MB  primary  ntfs         boot
 2      104857MB  209714MB  104858MB  primary  ntfs
 3      209714MB  312115MB  102401MB  primary
                                      
root@sysresccd /tmp % parted /dev/sda unit s print
Model: ATA ST2000DL003-9VT1 (scsi)
Disk /dev/sda: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start       End         Size        Type     File system  Flags
 1      63s         204796619s  204796557s  primary  ntfs         boot
 2      204797952s  409597951s  204800000s  primary  ntfs
 3      409597952s  609599487s  200001536s  primary
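
If you only need the label type, for example in a script, parted's script mode ('-s') lets you extract just that field. A small sketch (the awk field split assumes the output format shown above):

% parted -s /dev/sda print | awk -F': ' '/Partition Table/ {print $2}'
msdos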


References:

[1]
(parted) mklabel msdos

Others:

Parted User's Manual

GUID Partition Table

Make the most of large drives with GPT and Linux

Fun with GPT partitioning

Linux Creating a Partition Size Larger than 2TB

Howto capture and record the console screen output to a file on disk

The basics of capturing data to a file require simple redirection or the use of the 'tee' program, for example.

Examples:

$ ls -la > /var/tmp/output.ls.txt
$ ls -la | tee /var/tmp/output.ls.txt
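
If the error output should land in the file as well, stderr has to be redirected too. For example:

$ ls -la /nonexistent > /var/tmp/output.ls.txt 2>&1
$ ls -la /nonexistent 2>&1 | tee /var/tmp/output.ls.txt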

But sometimes programs are interactive, or we simply want to capture our whole session without worrying about redirecting stdout to a file.

The solution is to use the 'screen' tool.

Example #1: Screen basic usage

# to start a session with the name 'example' run
$ screen -S example

# to leave the screen session type: CONTROL-a d
# you are placed back in the original shell

# to reattach to the created session
$ screen -ls
$ screen -r example

Example #2: Enable screen logging

The option '-L' instructs screen to create a log file that captures all command output in the screen session.

$ screen -S logexample -L

# inside the screen session
$ echo 'some output'

# to leave the screen session type: CONTROL-a d

# this is the default log file for screen
$ ls -la screenlog.0

$ cat screenlog.0
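
An alternative to screen for pure session recording is the classic 'script' utility, which writes everything shown on the terminal to a file:

$ script /var/tmp/session.log
$ echo 'some output'
$ exit               # 'exit' or CONTROL-d stops the recording
$ cat /var/tmp/session.log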

Automatic debugging sessions of C programs and gdb analysis of their core dumps

In our example we are going to concentrate on a little program, bad_program, to demonstrate our semi-automated debugging approach.

For debugging purposes we should first compile the program with debugging symbols. The possible ways are listed below and should be taken only as simple examples of how to do this. More about this in [1].

# no debugging at all
 (0) $ gcc bad_program.c -o bad_program

# with debugging symbols  
 (1) $ gcc -g3     bad_program.c -o bad_program1
 (2) $ gcc -g      bad_program.c -rdynamic -o bad_program2
 (3) $ gcc -g  -O0 bad_program.c -rdynamic -o bad_program3
 (4) $ gcc -g3 -Os bad_program.c -o bad_program4
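
To check which of these binaries actually carry debugging symbols, one quick sketch is to look for the .debug_* ELF sections:

$ readelf -S bad_program1 | grep debug    # .debug_info etc. are present
$ readelf -S bad_program  | grep debug    # nothing: compiled without -g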

The tests were run on the following system.

$ gcc -v
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.3-4ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
Thread model: posix
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) 

$ gdb -v
GNU gdb (GDB) 7.1-ubuntu

The example program itself is written in such a way that it crashes and generates a core dump each time it is started.

Example 1

$ ./bad_program 1
Segmentation fault

To get the core dump file written to disk, so that it can later be analysed in the GNU debugger (gdb), we first need to allow core dumps.

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

$ ulimit -c unlimited 
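
Note that 'ulimit -c' only affects the current shell and the processes started from it. It can also be useful to check how the kernel names the core files:

$ cat /proc/sys/kernel/core_pattern
core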

When we run the program again it will create a core file for later analysis.

Example 2

$ ./bad_program 1
Segmentation fault (core dumped)

$ ls -la core 
-rw------- 1 radoslaw radoslaw 151552 2011-04-24 19:50 core

In our example we only want to run the gdb 'where' command, but if needed, the my_session.gdb.cmds file below can be extended with any number of commands we may be interested in. More about useful gdb debugging commands can be found in [2].

$ cat my_session.gdb.cmds 
where
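
If more context is needed, the command file can simply be extended, one gdb command per line, for example (a sketch; any gdb commands will do):

$ cat my_session.gdb.cmds
where
info registers
bt full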

Our test session can look like:

Debugging session #1

$ export BAD_PROGRAM=bad_program
$ for i in $(seq 1 4); do rm -f core; echo;  echo " ---- ---- ---- [$i] starting the program ---- ---- ----"; ./$BAD_PROGRAM $i ; echo " ---- ---- ---- [$i] starting gdb ---- ---- ----"; gdb -batch -x my_session.gdb.cmds -n $BAD_PROGRAM core ; done | tee ${BAD_PROGRAM}.log

 ---- ---- ---- [1] starting the program ---- ---- ----
 ---- ---- ---- [1] starting gdb ---- ---- ----

warning: Can't read pathname for load map: Input/output error.[New Thread 16934]

1051 ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S: No such file or directory.
Core was generated by `./bad_program 1'.
Program terminated with signal 11, Segmentation fault.
#0  __memcpy_ssse3 () at ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S:1051
 in ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S
#0  __memcpy_ssse3 () at ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S:1051
#1  0x00e9073c in ?? () from /lib/ld-linux.so.2
#2  0x0804844d in generate_core ()
#3  0x08048566 in main ()

 ---- ---- ---- [2] starting the program ---- ---- ----
 ---- ---- ---- [2] starting gdb ---- ---- ----
[New Thread 16938]

warning: Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program 2'.
Program terminated with signal 11, Segmentation fault.
#0  0x0804852b in try_core2 ()
#0  0x0804852b in try_core2 ()
#1  0x0804848e in generate_core ()
#2  0x08048566 in main ()

 ---- ---- ---- [3] starting the program ---- ---- ----
 ---- ---- ---- [3] starting gdb ---- ---- ----

warning: [New Thread 16942]
Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program 3'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048541 in try_core3 ()
#0  0x08048541 in try_core3 ()
#1  0x080484cf in generate_core ()
#2  0x08048566 in main ()

 ---- ---- ---- [4] starting the program ---- ---- ----
 ---- ---- ---- [4] starting gdb ---- ---- ----
[New Thread 16946]

warning: Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program 4'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048541 in try_core3 ()
#0  0x08048541 in try_core3 ()
#1  0x080484cf in generate_core ()
#2  0x080484eb in generate_core ()
#3  0x08048566 in main ()

We can still see enough to say more or less where the problem happened, but we get much better results when the program is compiled with debugging symbols.

Debugging session #2

$ export BAD_PROGRAM=bad_program1
$ for i in $(seq 1 4); do rm -f core; echo;  echo " ---- ---- ---- [$i] starting the program ---- ---- ----"; ./$BAD_PROGRAM $i ; echo " ---- ---- ---- [$i] starting gdb ---- ---- ----"; gdb -batch -x my_session.gdb.cmds -n $BAD_PROGRAM core ; done | tee ${BAD_PROGRAM}.log

 ---- ---- ---- [1] starting the program ---- ---- ----
 ---- ---- ---- [1] starting gdb ---- ---- ----

warning: [New Thread 17974]
Can't read pathname for load map: Input/output error.
1051 ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S: No such file or directory.
Core was generated by `./bad_program1 1'.
Program terminated with signal 11, Segmentation fault.
#0  __memcpy_ssse3 () at ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S:1051
 in ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S
#0  __memcpy_ssse3 () at ../sysdeps/i386/i686/multiarch/memcpy-ssse3.S:1051
#1  0x00f6973c in ?? () from /lib/ld-linux.so.2
#2  0x0804844d in generate_core (n=1) at bad_program.c:15
#3  0x08048566 in main (argc=2, argv=0xbfb572a4) at bad_program.c:52

 ---- ---- ---- [2] starting the program ---- ---- ----
 ---- ---- ---- [2] starting gdb ---- ---- ----

warning: [New Thread 17978]
Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program1 2'.
Program terminated with signal 11, Segmentation fault.
#0  0x0804852b in try_core2 (n=2) at bad_program.c:41
41   *ptr=n;
#0  0x0804852b in try_core2 (n=2) at bad_program.c:41
#1  0x0804848e in generate_core (n=2) at bad_program.c:20
#2  0x08048566 in main (argc=2, argv=0xbff99024) at bad_program.c:52

 ---- ---- ---- [3] starting the program ---- ---- ----
 ---- ---- ---- [3] starting gdb ---- ---- ----
[New Thread 17982]

warning: Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program1 3'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048541 in try_core3 (n=3) at bad_program.c:47
47   *(ptr+n)=n; 
#0  0x08048541 in try_core3 (n=3) at bad_program.c:47
#1  0x080484cf in generate_core (n=3) at bad_program.c:25
#2  0x08048566 in main (argc=2, argv=0xbf92fdd4) at bad_program.c:52

 ---- ---- ---- [4] starting the program ---- ---- ----
 ---- ---- ---- [4] starting gdb ---- ---- ----
[New Thread 17986]

warning: Can't read pathname for load map: Input/output error.
Core was generated by `./bad_program1 4'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048541 in try_core3 (n=3) at bad_program.c:47
47   *(ptr+n)=n; 
#0  0x08048541 in try_core3 (n=3) at bad_program.c:47
#1  0x080484cf in generate_core (n=3) at bad_program.c:25
#2  0x080484eb in generate_core (n=4) at bad_program.c:29
#3  0x08048566 in main (argc=2, argv=0xbfdd2994) at bad_program.c:52

In lines # 16, 27, 40, 53 of the output we can see the instructions that caused the core dumps. We also see the full function arguments, which helps to better understand the program's logic flow. One more thing to notice is the difference in the debugging output when analysing the cores from:

./bad_program  1   # no debugging symbols; versus
./bad_program1 1   # with debugging symbols

From debugging session #1 we can hardly guess where the problem was, whereas in debugging session #2 we clearly see that the problem started at line 'bad_program.c:15'.

Example program

The "bad" example program that dumps core every time it is run is below. More info about this in [3].

Source code of bad_program.c

/*
 * The header names were stripped by the blog's HTML filter; these are the
 * ones the code below actually needs (strcpy, atoi, NULL).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void try_core1(int n);
void try_core2(int n);
void try_core3(int n);
void generate_core(int n);

void generate_core( int n ) {
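  /* walks down from n; whenever n%10 is 1, 2 or 3 the matching
     crash helper is called (the final recursion is unconditional) */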
  if ( 1 == (n%10) ) { 
    try_core1(n);
    generate_core(n-1);
  } 
  
  if ( 2 == (n%10) ) { 
    try_core2(n);
    generate_core(n-1);
  } 
  
  if ( 3 == (n%10) ) { 
    try_core3(n);
    generate_core(n-1);
  } 
  
  generate_core(n-1);
}

void try_core1( int n ) {
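  /* crash #1: strcpy() into a NULL pointer */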
  char *ptr=NULL;
  
  strcpy(ptr, "this is going to hurt ;)...");
}

void try_core2( int n ) {
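  /* crash #2: write through a NULL pointer */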
  int *ptr=NULL;
  
  *ptr=n;
}

void try_core3( int n ) {
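  /* crash #3: write through an uninitialized pointer */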
  int *ptr;
  
  *(ptr+n)=n; 
}


int main(int argc, char **argv) {
  generate_core( atoi( argv[1] ) );
}

References:
[1]
HowTo: Debug Crashed Linux Application Core Files Like A Pro
Debugging with GDB

[2]
Mastering Linux debugging techniques
Linux software debugging with GDB
GDB: The GNU Project Debugger

[3]
How to programmatically cause a core dump in C/C++