Debugging an SDK Application on the Multiservices PIC

To debug an application on the Multiservices PIC, you can use logging and tracing functionality similar to what is available on the Routing Engine.

In addition, you can use the debugging tools described in the sections that follow.

Accessing the Multiservices PIC

The /var/tmp directory on the Routing Engine is mounted as /var/crash on the PIC. You can use this directory to transfer files in either direction between the Routing Engine and the PIC. Standard UNIX commands are available on the PIC; some useful ones are ps, ls, and top.
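
Because the two mount points name the same directory, a path under /var/tmp on the Routing Engine corresponds one-for-one to a path under /var/crash on the PIC. The following sketch illustrates the translation; the helper names are illustrative, not part of the SDK:

```python
# Translate paths between the Routing Engine view (/var/tmp) and the
# PIC view (/var/crash) of the shared directory. Illustrative only.

RE_MOUNT = "/var/tmp"
PIC_MOUNT = "/var/crash"

def re_to_pic(path):
    """Return the PIC-side path for a file under /var/tmp on the Routing Engine."""
    if not path.startswith(RE_MOUNT + "/"):
        raise ValueError("not under " + RE_MOUNT)
    return PIC_MOUNT + path[len(RE_MOUNT):]

def pic_to_re(path):
    """Return the Routing Engine-side path for a file under /var/crash on the PIC."""
    if not path.startswith(PIC_MOUNT + "/"):
        raise ValueError("not under " + PIC_MOUNT)
    return RE_MOUNT + path[len(PIC_MOUNT):]
```

For example, a core file the PIC writes under /var/crash appears on the Routing Engine under /var/tmp with the same file name.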

All commands are issued from the Routing Engine with root privileges. Login on the PIC is root, with no password.

Console Access on the PIC

Console access on M10i and M7i routers is as follows:

vty sbr
# pty pic-number

where pic-number is derived from the ms- interface name as follows:

ms-0/0/0						0
ms-0/1/0 						1
ms-0/2/0 						2
ms-0/3/0 						3

ms-1/0/0 						4
ms-1/1/0 						5
ms-1/2/0 						6
ms-1/3/0 						7

For example:

ms-1/3/0
vty sbr
pty 7
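
The mapping in the table reduces to pic-number = fpc-slot x 4 + pic-slot on these platforms. A quick sketch of that arithmetic (the helper is illustrative, not an SDK or JUNOS utility):

```python
import re

def pty_number(ifname):
    """Compute the pty number for an ms- interface on M7i/M10i routers.

    The table above implies pty = fpc * 4 + pic; e.g. ms-1/3/0 maps to 7.
    """
    m = re.match(r"ms-(\d+)/(\d+)/\d+$", ifname)
    if m is None:
        raise ValueError("not an ms- interface: " + ifname)
    fpc, pic = int(m.group(1)), int(m.group(2))
    return fpc * 4 + pic
```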

Console access on MX routers is as follows:

cty -0 -f fpc<fpc-number> pic<pic-number>

For example:

ms-1/0/0
cty -0 -f fpc1 pic0

Console access for all other platforms is as follows:

vty fpc-number
pty pic-number

For example:

ms-1/3/0
vty fpc1
pty 3

Using telnet to Reach the PIC

The following example shows how you can telnet to an internal routing instance on the PIC.

regress@gauntlet> show interfaces terse pc*
Interface               Admin Link Proto    Local                 Remote
pc-1/2/0                up    up  
pc-1/2/0.16383          up    up   inet     10.0.0.1            --> 10.0.0.34
                                            128.0.0.1           --> 128.0.3.17
pc-1/2/0.16384          up    up   inet     20.0.0.1            --> 20.0.0.34


To access the PIC, enter the Remote value shown for the pc interface:

telnet -Ji 128.0.3.17

To access the Routing Engine in the previous example, you would enter the Local value:

telnet -Ji 128.0.0.1

Logging and Tracing on the Multiservices PIC

Components of SDK applications running on the Multiservices PIC can send log messages to syslog destinations on the Routing Engine or to the PIC console, as specified by configuration settings in the Routing Engine CLI. SDK application components on the PIC can also write trace messages of a given type to a specified file, again as configured in the CLI.

Logging on the Multiservices PIC

SDK applications can set log levels for the external, pfe, daemon, and kernel facilities, and can redirect the log messages either to the Routing Engine or to the console on the PIC.

For example, the following configurations specify, first, logging to the Routing Engine, and second, logging to the PIC console:

[edit]
chassis {
    fpc x {
        pic y {
            adaptive-services {
                service-package {
                    extension-provider {
                        syslog {
                            (kernel | daemon | pfe | external) {
                                <level>;
                                routing-engine;
                            }
                        }
                    }
                }
            }
        }
    }
}

[edit]
chassis {
    fpc x {
        pic y {
            adaptive-services {
                service-package {
                    extension-provider {
                        syslog {
                            (kernel | daemon | pfe | external) {
                                <level>;
                                destination pic-console;
                            }
                        }
                    }
                }
            }
        }
    }
}

The following log levels are available:

  alert      -  Conditions that should be corrected immediately
  any        -  All levels
  critical   -  Critical conditions
  emergency  -  Panic conditions
  error      -  Error conditions
  info       -  Informational messages
  none       -  No messages
  notice     -  Conditions that should be handled in a specialized manner
  warning    -  Warning messages
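
These levels follow standard syslog ordering: configuring a level admits messages at that severity or higher, with any and none as catch-alls. A sketch of that filtering rule, assuming standard syslog semantics (the helper itself is illustrative, not an SDK API):

```python
# Standard syslog ordering from most to least severe; setting a level
# admits messages at that severity or more severe. "any" and "none"
# are catch-all settings. Illustrative only.

SEVERITY = ["emergency", "alert", "critical", "error",
            "warning", "notice", "info"]

def passes(configured, msg_level):
    """Return True if a message at msg_level is logged under the configured level."""
    if configured == "none":
        return False
    if configured == "any":
        return True
    return SEVERITY.index(msg_level) <= SEVERITY.index(configured)
```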

Severity and Destination of Log Messages

The severity of the syslog messages generated by the management component of an SDK application can differ from the severity generated by the control and data components of the same application on the Multiservices PIC.

On the Routing Engine, the severity of log messages and the syslog destination (console, file, or both) are specified through the CLI:

set system syslog file filename facility level

On the PIC, the same information is specified through the Routing Engine CLI as follows:

set chassis fpc fpc pic pic adaptive-services service-package extension-provider syslog syslog-destination facility level

If syslog-destination is specified as routing-engine, messages from the PIC to the Routing Engine are filtered according to the set system syslog ... settings on the Routing Engine. If syslog-destination is specified as pic-console, messages from the control and data components are logged to the PIC console.

When debugging an application, you can also set the destination through the mspdbg-cli (see Using the mspdbg-cli) as follows:

MSP-DEBUG> set msp syslog <destination>

where destination is one of:

    pic-console           Sets the syslog destination to the PIC console.
    routing-engine        Sets the syslog destination to the Routing Engine.

Tracing on the Multiservices PIC

SDK applications can propagate trace options and trace file specifications to the control component of the application running on the PIC. To control tracing, you can add an object to the trace options in the input configuration DDL, and call the trace APIs to enable or disable tracing from your application's control component.

The management component on the Routing Engine calls the junos_trace_get_pic_trace_file_info() function in libjunos-sdk to obtain the information needed to initialize the trace infrastructure on the PIC. This function reads the daemon_trace_file_options_t structure, which the system creates when your application calls junos_app_init() during initialization, and fills in a pic_daemon_trace_file_options_t structure containing the subset of fields required to initialize tracing on the PIC. The values in the structure are configured through the CLI, and its fields are opaque to SDK applications.

The management component then sends the trace information to the control component on the PIC using libconn. Once the control component receives the information, it can configure tracing by calling additional functions such as junos_trace_set_pic_trace_file_info(), trace_file_open(), trace_flag_set(), junos_trace_check_trace_level(), junos_syslog_get_config(), and junos_trace().

You set the trace log types in the traceoptions configuration for your application's management component. You specify the name of the trace file and its maximum size in the traceoptions flag setting. As the trace file fills up, the system copies it to a historical file, and new trace logs are written to a new file. The number of these historical files maintained is also configurable. (For more information about configuring traceoptions, see the SDK CLI Configuration section of this documentation, and the JUNOS System Basics Configuration Guide.)
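
The size-capped rotation described above can be sketched as follows. The file-naming scheme and helper names here are assumptions for illustration, not the exact JUNOS implementation:

```python
import os

def rotate(trace_file, max_files):
    """Rotate trace_file into numbered historical files.

    trace_file.0 is the most recent historical copy; at most max_files
    historical files are kept. The naming is illustrative only.
    """
    oldest = "%s.%d" % (trace_file, max_files - 1)
    if os.path.exists(oldest):
        os.remove(oldest)
    for i in range(max_files - 2, -1, -1):
        src = "%s.%d" % (trace_file, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (trace_file, i + 1))
    if os.path.exists(trace_file):
        os.rename(trace_file, trace_file + ".0")

def write_trace(trace_file, line, max_size, max_files):
    """Append a trace line, rotating first if the file has reached max_size."""
    if os.path.exists(trace_file) and os.path.getsize(trace_file) >= max_size:
        rotate(trace_file, max_files)
    with open(trace_file, "a") as f:
        f.write(line + "\n")
```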

For more information on using this functionality, see the programming task, Using Logging and Tracing on the Multiservices PIC.

Location of Trace Messages

Trace messages are written to /var/crash/filename on the Multiservices PIC. You can access the same file on the Routing Engine at /var/tmp/filename.

If the global trace option is set to redirect messages to a remote syslog destination, messages from the application's management component are redirected to the remote destination, but trace messages from the PIC are still written to /var/tmp/filename on the Routing Engine. Redirecting trace messages to a remote syslog destination is not supported on the PIC.

Effects of Connection Loss

If the connection between the Routing Engine and the Multiservices PIC is lost, dynamic configuration changes made through the syslog and trace CLI commands do not take effect on the PIC. When the PIC reconnects, the syslog configuration changes are synchronized to the syslog.conf file on the PIC.

If there is a configuration change for tracing, the management component of the application should take care of resending the data to the control component on the PIC when the PIC comes back online.

Debugging Packet Flow

An incoming packet goes from an I/O PIC to the PFE for a route lookup, then to the XLR MAC and through the FMN network to the poller threads. The poller threads feed data packets into the data loops' receive FIFOs in one of two ways: sprayed round-robin across all data loops (the default), or hashed to a single data loop when flow affinity is configured.

(For more information on how applications specify packet distribution, see Flow Affinity.)

From that point, the application can process the packet for classification, filtering, protocol processing, forwarding, and so forth.

The mspdbg-cli commands described in Using the mspdbg-cli display counters that you can use to identify overrun conditions; for example, you can use the show msp pot command to examine the packet-ordering CPU, and the show msp poller command to examine the poller CPUs.

The following figure is an overview of packet flow through the Multiservices PIC, when the default round-robin packet distribution scheme is operating, and indicates where various debugging tools are available.

[Figure packet-lifecycle-rr-g016885.gif: Debugging on the Multiservices PIC: Round Robin Packet Distribution]

The following figure is the same overview, but with flow affinity packet distribution operating.

[Figure packet-lifecycle-flow-g016884.gif: Debugging on the Multiservices PIC: Flow Affinity Packet Distribution]

The lifecycle stages illustrated in the figures are as follows:

  1. The packet arrives on the I/O PIC and enters the Packet Forwarding Engine (PFE).

  2. The PFE forwards the data to the PIC based on the routes, filters, and service sets that have been configured. Routes can also forward control packets.

  3. The poller threads poll for the data packet and inject it into the receive (Rx) FIFOs. The kernel sends only the traffic marked as data traffic to the poller threads. Control traffic (everything not marked as data traffic) is sent to sockets that are open in applications.

  4. If you are using flow affinity, each packet is hashed to one system-determined data loop, and the packet is enqueued to a transmit (Tx) FIFO for forwarding.

  5. If you are using round-robin distribution (the default), packets are sprayed across all data loops and sent back to the MAC through the POT.

  6. The PFE performs another route lookup and forwards the packet outward.
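
Steps 4 and 5 describe the two distribution schemes. The following sketch contrasts them; the hash function and the number of data loops here are illustrative (the real hash is system-determined):

```python
import itertools
import zlib

NUM_DATA_LOOPS = 6  # e.g. the six DATA APP CPUs in the cpumask output shown later

# Round-robin (the default): packets are sprayed across all data loops.
_rr = itertools.cycle(range(NUM_DATA_LOOPS))

def round_robin_loop(_packet):
    """Return the next data loop in rotation, ignoring packet contents."""
    return next(_rr)

def flow_affinity_loop(src, dst, sport, dport, proto):
    """Return a data loop chosen by hashing the flow key, so every packet
    of the same flow lands on the same loop (illustrative hash)."""
    key = ("%s|%s|%d|%d|%d" % (src, dst, sport, dport, proto)).encode()
    return zlib.crc32(key) % NUM_DATA_LOOPS
```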

Using tcpdump

You can use tcpdump on the PIC to debug traffic destined for the control cores. For example:

tcpdump -ni ms0

Using the mspdbg-cli

The mspdbg-cli tool provides some insight into the PIC drivers. Launch it from the PIC as follows:

mspdbg-cli 

Use the show msp commands, demonstrated in the sections that follow, to display this information; appending ? (as in show msp stats ?) lists the available subcommands.

CPU Roles

The role ID in the output is an identifier that is unique among CPUs of the same type (for example, CTRL, USER APP, or DATA APP). These three CPU types are generally where SDK applications run; for plugins, however, the control event handler is called on a USER INT CPU (INT stands for INTERNAL, because the plugin code is called by mspmand, an internal JUNOS daemon).

The following example shows output for two control cores and two data cores (leaving 4 user cores). For details about the CPU architecture, see Architecture of the Services SDK.

root@ms12% mspdbg-cli
MSP-DEBUG> show msp cpumask
CPU roles:
 
CPU  0: CTRL      (role id 0)
CPU  1: POT
CPU  2: USER INT  (role id 0)
CPU  3: USER INT  (role id 1)
CPU  4: CTRL      (role id 1)
CPU  5: CTRL      (role id 2)
CPU  8: USER APP  (role id 0)
CPU  9: USER APP  (role id 1)
CPU  10: USER APP  (role id 2)
CPU  11: USER APP  (role id 3)
CPU  12: USER APP  (role id 4)
CPU  13: USER APP  (role id 5)
CPU  14: USER APP  (role id 6)
CPU  15: USER APP  (role id 7)
CPU  16: USER APP  (role id 8)
CPU  17: USER APP  (role id 9)
CPU  18: USER APP  (role id 10)
CPU  19: USER APP  (role id 11)
CPU  20: USER APP  (role id 12)
CPU  21: USER APP  (role id 13)
CPU  22: USER APP  (role id 14)
CPU  23: USER APP  (role id 15)
CPU  24: DATA POLL (role id 0)
CPU  25: DATA APP  (role id 0)
CPU  26: DATA APP  (role id 1)
CPU  27: DATA APP  (role id 2)
CPU  28: DATA POLL (role id 1)
CPU  29: DATA APP  (role id 3)
CPU  30: DATA APP  (role id 4)
CPU  31: DATA APP  (role id 5)

Statistics

You can display statistics for control CPUs, FIFO queues, poller CPUs, and the packet-ordering CPU (POT).

MSP-DEBUG> show msp stats ?
    ctrl                  ctrl stats
    fifos                 fifo stats
    poller                poller stats
    pot                   pot cpu stats

For example, the following output displays statistics for the control CPUs in a system.

MSP-DEBUG> show msp stats ctrl

Ctrl Statistics (cpu role id 0):

Intr rcvd        6683
Intr pkts rcvd   6685
Intr frbk rcvd   11467
Frbk drained     902107
Intr fifo fail   0
Frbk fifo fail   0

pkts rcvd        6685
frbk rcvd        902107
mhdr alloc fail  0
ifd down         0

pkts xmit        902107
pkts xmit fail   12
mpool alloc fail 0

ptcl free        6697
ptcl free fail   2

To show statistics for a polling CPU, use the role ID from the output of the show msp cpumask command to specify which CPU you want to see; if you do not specify a role ID, the system reports statistics for the CPU with role ID 0.

For example, the output shown earlier has:

CPU  24: DATA POLL (role id 0)
CPU  28: DATA POLL (role id 1)

The statistics are displayed as follows:

MSP-DEBUG> show msp stats poller 0
 
Poller Statistics (cpu role id 0):
 
pkts rcvd        12730
mgmt packets     0
ctrl packets     0
data packets     12730
frbk rcvd        0

pkts xmit        12717
pkts xmit fail   0    <---- A nonzero value means the poller was unable to enqueue to the POT
                            because of an excessive packet rate. (The system retries if the
                            enqueue operation fails.)
data pkts freed  3133
 
mgmt fifo fail   0
ctrl fifo fail   0
data fifo fail   1076  <--- A nonzero value means the poller failed to enqueue a packet because
                            there was no space in the input FIFOs.
data fifo retry  5380  <--- A nonzero value means the poller had to retry the enqueue.
frbk fifo fail   0
invalid app type 0
 
MSP-DEBUG> show msp stats poller 1
 
Poller Statistics (cpu role id 1):
 
pkts rcvd        0
mgmt packets     0
ctrl packets     0
data packets     0
frbk rcvd        0
 
pkts xmit        0
pkts xmit fail   0
data pkts freed  0
 
mgmt fifo fail   0
ctrl fifo fail   0
data fifo fail   0
data fifo retry  0
frbk fifo fail   0
invalid app type 0
invalid pool id  0
pktloop jbuf ref 0
thread status    1
 
MSP-DEBUG> show msp stats poller 2
Request for stats on poller 2 failed: Invalid argument

The final command fails because there is no poller with role ID 2; there are only two pollers, 0 and 1.

Poller statistics can also tell you whether all packets received were transmitted back, as well as the number of freebacks received. For example:

pkts rcvd        17680954  <----------
mgmt packets     0                    |
ctrl packets     0                    |
data packets     17680954             | If these values do not match, the application or POT 
                                      |   could be holding onto some packets.
frbk rcvd        0    <--- freebacks  |
                                      |
pkts xmit        17677885  <----------
pkts xmit fail   0    
data pkts freed  11306105
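
This received-versus-transmitted comparison is easy to automate once you capture the counters. A small sketch (field names follow the output above; the helper is illustrative):

```python
def packets_in_flight(stats):
    """Packets the poller received but has not transmitted back.

    A large or steadily growing value suggests that the application or
    the POT is holding on to packets, as the annotations above note.
    """
    return stats["pkts rcvd"] - stats["pkts xmit"]
```

With the counters shown above, packets_in_flight({"pkts rcvd": 17680954, "pkts xmit": 17677885}) reports 3069 packets outstanding.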

The following output displays statistics for a packet-ordering CPU:

MSP-DEBUG> show msp stats pot

POT Statistics (cpu role id 0):

pkts rcvd                         106223098
pkts inorder                      51217609
pkts outoforder                   55005489
seq  wrap_around                  0

pks xmit                          106216453
pkts xmit retry cnt               0  <-- A nonzero value indicates the underlying hardware is busy
pkts rcvd with invalid seq. num   0

You display FIFO statistics using the following command, providing the role ID of the data CPU you want to see.

MSP-DEBUG> show msp stats fifos 27
        Num Entries      0
        RX - PR Index    828
        TX - CR Index    828

CPU usage monitoring is enabled by default for the Multiservices PIC. You can control it manually using the commands set msp service-sets cpu-usage disable and set msp service-sets cpu-usage enable.

Commands to display CPU usage statistics are shown next:

MSP-DEBUG> show msp poll-cpu-usage

 POLL CPU Usage (cpu role id 0):

 CPU Usage (last 1 Sec)

 Rx Loop              0.00% 
 Tx Loop              0.00% 
 Freeback Loop        0.00% 
 Total                0.00% 

 CPU Usage (last 5 Sec)

 Rx Loop              0.00%
 Tx Loop              0.00%
 Freeback Loop        0.00%
 Total                0.00% 

 CPU Usage (Average)

 Rx Loop              0.32%
 Tx Loop              0.29%
 Freeback Loop        0.00%
 Total                0.61%

You can display CPU usage statistics for the packet-ordering CPU as follows:

MSP-DEBUG> show msp pot-cpu-usage            

 POT CPU Usage:

 CPU Usage (last 1 Sec)

 In Order Pkts:       0.00%
 Out of Order Pkts:   0.00%
 Total                0.00%

 CPU Usage (last 5 Sec)

 In Order Pkts:       0.00%
 Out Of Order Pkts:   0.00%
 Total                0.00%

 CPU Usage (Average)

 In Order Pkts:       0.11%
 Out Of Order Pkts:   0.19%
 Total                0.30%

Using the debugger-on-panic Option

By default, the PIC reboots on kernel panic or daemon failures. For debugging purposes, you can change this setting by configuring the debugger-on-panic option, as shown below:

regress@gauntlet# show interfaces pc-0/0/0
multiservice-options {
    debugger-on-panic;
}

Using gdb

You can use gdb version 6.1.1 to debug processes on the PIC. gdb is located on the PIC at /usr/bin/gdb.

Preparing to Use gdb With Shared Libraries

To determine whether your daemon is using shared libraries, use the ldd command.

To debug a daemon that uses shared libraries, follow these steps:

  1. Copy the binary with symbols to /var/tmp on the Routing Engine.

  2. On the Routing Engine, create the directories /var/tmp/usr/lib and /var/tmp/usr/libexec.

  3. Copy the libraries that the daemon requires, along with the runtime linker (ld-elf.so.1), from the backing sandbox to the /var/tmp/usr/lib and /var/tmp/usr/libexec directories you created on the Routing Engine.

  4. Launch gdb on the PIC.

Launching gdb

The following sequence shows how to launch gdb and the initial gdb output.

root@ms11% cd /var/crash/
root@ms11% gdb
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "mips-marcel-freebsd".
(gdb) set solib-absolute-prefix .
(gdb) file <exe_name>
(gdb) core-file <core_file_name>

Debugging a Core File

Follow these steps to debug a core file:

  1. Look at the list of shared libraries needed by the system process, mspmand:

    root@ms11% ldd /usr/sbin/mspmand 
    /usr/sbin/mspmand:
            libisc.so.2 => /usr/lib/libisc.so.2 (0x405a4000)
            libjipc.so.1 => /usr/lib/libjipc.so.1 (0x405f1000)
            libutil.so.5 => /usr/lib/libutil.so.5 (0x40634000)
            libc.so.6 => /usr/lib/libc.so.6 (0x40684000)
    

  2. Make sure the shared libraries are in /var/crash:

    root@ms11% ls -laR /var/crash/usr
    total 16
    drwxr-xr-x  4 root  wheel   512 Jan 10 22:49 .
    drwsrwxrwx  8 root  wheel  1536 Jan 10 22:49 ..
    drwxr-xr-x  2 root  wheel   512 Jan 10 22:50 lib
    drwxr-xr-x  2 root  wheel   512 Jan 10 22:50 libexec
    
    /var/crash/usr/lib:
    total 16852
    drwxr-xr-x  2 root  wheel      512 Jan 10 22:50 .
    drwxr-xr-x  4 root  wheel      512 Jan 10 22:49 ..
    -rwxr-xr-x  1 930   930    7886407 Jan  3 06:37 libc.so.6
    -rwxr-xr-x  1 930   930     239643 Jan  3 06:30 libisc.so.2
    -rwxr-xr-x  1 930   930      55026 Jan  3 06:30 libjipc.so.1
    -rwxr-xr-x  1 930   930     376793 Jan  3 06:32 libutil.so.5
    
    /var/crash/usr/libexec:
    total 2088
    drwxr-xr-x  2 root  wheel      512 Jan 10 22:50 .
    drwxr-xr-x  4 root  wheel      512 Jan 10 22:49 ..
    -rwxr-xr-x  1 930   930    1040322 Jan  3 06:40 ld-elf.so.1
    

  3. Launch gdb:

    root@ms11% 
    root@ms11% pwd
    /var/crash
    root@ms11% gdb
    

Sample gdb output is as follows:

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "mips-marcel-freebsd".
(gdb) set solib-absolute-prefix .        <----- This is needed
(gdb) file mspmand
Reading symbols from mspmand...done.
(gdb) core-file mspmand.core.ms11.0 
warning: exec file is newer than core file.
Core was generated by `mspmand'.
Program terminated with signal 6, Aborted.
Reading symbols from ./usr/lib/libisc.so.2...done.
Loaded symbols for ./usr/lib/libisc.so.2
Reading symbols from ./usr/lib/libjipc.so.1...done.
Loaded symbols for ./usr/lib/libjipc.so.1
Reading symbols from ./usr/lib/libutil.so.5...done.
Loaded symbols for ./usr/lib/libutil.so.5
Reading symbols from ./usr/lib/libc.so.6...done.
Loaded symbols for ./usr/lib/libc.so.6
Reading symbols from ./usr/libexec/ld-elf.so.1...done.
Loaded symbols for ./usr/libexec/ld-elf.so.1
#0  0x4078e460 in __sys_select () at select.S:2
2       select.S: No such file or directory.
        in select.S
Current language:  auto; currently asm
(gdb) bt
#0  0x4078e460 in __sys_select () at select.S:2
#1  0x406c6538 in __pselect (count=9, rfds=0x542020, wfds=0x542120, 
    efds=0x542220, timo=0x0, mask=0x0)
    at ../../../src/lib/libc/gen/pselect.c:79
#2  0x405a9f84 in __evGetNext (opaqueCtx={opaque = 0x9}, opaqueEv=0x77ffdeb0, 
    options=2) at ../../../../src/juniper/lib/libisc2/eventlib.c:782
#3  0x405aac3c in __evMainLoop (opaqueCtx={opaque = 0x542000})
    at ../../../../src/juniper/lib/libisc2/eventlib.c:782
#4  0x00423000 in main (argc=2, argv=0x77ffdf40)
    at ../../../../src/juniper/usr.sbin/mspmand/mspman_main.c:395
(gdb) q

Launching a Process from gdb

The following shows how to launch a process and set a breakpoint:

root@ms11% gdb
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "mips-marcel-freebsd".
(gdb) set solib-absolute-prefix .
(gdb) file pthr_test
Reading symbols from pthr_test...done.
(gdb) b main
Breakpoint 1 at 0x400eb4: file src/juniper/lib/libmp-sdk/test/pthr_test.c, line 46.
(gdb) run
Starting program: /var/crash/pthr_test 
warning: Unable to get location for thread creation breakpoint: generic error
[New LWP 100244]
[New Thread 0x44a000 (LWP 100244)]
[Switching to Thread 0x44a000 (LWP 100244)]

Breakpoint 1, main (argc=1, argv=0x77ffde04)
    at src/juniper/lib/libmp-sdk/test/pthr_test.c:46
46      src/juniper/lib/libmp-sdk/test/pthr_test.c: No such file or directory.
        in src/juniper/lib/libmp-sdk/test/pthr_test.c
(gdb) c
Continuing.
message from thread 5
message from thread 6
message from thread 7
message from thread 8
message from thread 9
message from thread 2
message from thread 0
message from thread 1
message from thread 3
message from thread 4

Attaching to a Running Process

The following shows how to attach to a running process:

root@ms11% gdb -v
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "mips-marcel-freebsd".
root@ms11% 
root@ms11% 
root@ms11% ps -aux | grep mspmand
root   144  0.0  0.1  4056  1408  ??  S     7:18PM   0:00.25 /usr/sbin/mspmand 
root   544  0.0  0.1  2900  1140  d0  S+    7:30PM   0:00.06 grep mspmand
root@ms11% cd /var/crash/
root@ms11% gdb
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "mips-marcel-freebsd".
(gdb) set solib-absolute-prefix .
(gdb) file mspmand
Reading symbols from mspmand...done.
(gdb) attach 144
Attaching to program: /var/crash/mspmand, process 144
Reading symbols from ./usr/lib/libisc.so.2...done.
Loaded symbols for ./usr/lib/libisc.so.2
Reading symbols from ./usr/lib/libjipc.so.1...done.
Loaded symbols for ./usr/lib/libjipc.so.1
Reading symbols from ./usr/lib/libutil.so.5...done.
Loaded symbols for ./usr/lib/libutil.so.5
Reading symbols from ./usr/lib/libc.so.6...done.
Loaded symbols for ./usr/lib/libc.so.6
Reading symbols from ./usr/libexec/ld-elf.so.1...done.
Loaded symbols for ./usr/libexec/ld-elf.so.1
0x4078e3c0 in tcflow (fd=5513216, action=0)
    at ../../../src/lib/libc/gen/termios.c:245
245     ../../../src/lib/libc/gen/termios.c: No such file or directory.
        in ../../../src/lib/libc/gen/termios.c
(gdb) c
Continuing.
exit on signal 2
root@patriots% vty sbr

CSBR platform (266Mhz PPC 603e processor, 128MB memory, 512KB flash)

CSBR0(patriots vty)# pty 5
Connected.

^C
Program received signal SIGINT, Interrupt.
0x4078e3c0 in tcflow (fd=5513216, action=0)
    at ../../../src/lib/libc/gen/termios.c:245
245     in ../../../src/lib/libc/gen/termios.c

At this point, you can set breakpoints for debugging. In this example, gdb hits the breakpoint in the select call repeatedly: once a breakpoint is set, it is triggered every time program execution reaches it.

(gdb) b select
Breakpoint 1 at 0x4078e44c: file select.S, line 2.
(gdb) c
Continuing.

Breakpoint 1, __sys_select () at select.S:2
2       select.S: No such file or directory.
        in select.S
Current language:  auto; currently asm
(gdb) c
Continuing.

Breakpoint 1, __sys_select () at select.S:2
2       in select.S
(gdb) c
Continuing.

Breakpoint 1, __sys_select () at select.S:2
2       in select.S
(gdb) c
Continuing.

Breakpoint 1, __sys_select () at select.S:2
2       in select.S
(gdb) c
Continuing.

Debugging Plugins

The plugin control and data handlers run in the context of control threads and data threads created by mspmand, the main JUNOS process on the PIC. You must perform all active debugging on the PIC.

To debug a plugin using gdb, first attach to mspmand, and then select the thread to debug. If a plugin crashes, mspmand generates a core dump containing the entire object cache as configured in the CLI. You can use gdb commands such as info thread or thread <#> to examine the core dump.


2007-2009 Juniper Networks, Inc. All rights reserved. The information contained herein is confidential information of Juniper Networks, Inc., and may not be used, disclosed, distributed, modified, or copied without the prior written consent of Juniper Networks, Inc. in an express license. This information is subject to change by Juniper Networks, Inc. Juniper Networks, the Juniper Networks logo, and JUNOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.
Generated on Sun May 30 20:26:47 2010 for Juniper Networks Partner Solution Development Platform JUNOS SDK 10.2R1 by Doxygen 1.4.5