Lester Bird, Distinguished Engineer, Juniper Networks

Exploring the PTX10008 400G Line Rate MACsec 

Demo Drop 400G & 800G
[Slide: MACsec Frame Format diagram showing Original Frame, MACsec Frame, and SecTAG]

See the Juniper PTX 10008 router in action

Watch as not a single frame is dropped when Juniper Networks’ Distinguished Engineer Lester Bird demonstrates 400G line-rate MACsec on the Juniper PTX10008 high-performance modular chassis.


You’ll learn

  • How the PTX10008 supports both SONiC and Junos operating systems

  • Which Juniper ASIC is the engine for the PTX10008

  • How the PTX10008 delivers high-speed link security with no traffic loss

Who is this for?

Network Professionals, Security Professionals

Host

Lester Bird
Distinguished Engineer, Juniper Networks

Resources

Transcript

00:01 hi everyone, I'm Lester Bird from Juniper

00:04 Networks

00:05 I'm going to show you 400G line rate

00:07 MACsec on Juniper's newest modular

00:10 chassis

00:10 the PTX10008 here's a standard

00:14 disclaimer

00:15 as always this presentation reflects

00:17 current thought but

00:18 we can change direction in the future we

00:20 will generate 400G

00:22 line rate traffic across a MACsec link

00:24 between two PTX10008s

00:27 one of the PTXs runs Junos Evolved

00:29 Juniper's modern cloud operating system

00:32 the other PTX10008 runs SONiC an open

00:35 source

00:35 NOS driven by the Open Compute Project

00:38 we will show three different frame sizes

00:40 66 bytes 1,000 bytes and 9,000 bytes

00:44 for all frame sizes there will be no

00:46 traffic loss showing that the PTX10008

00:49 can indeed do line rate MACsec

00:51 for those of you curious why we picked

00:53 66 bytes instead of 64 bytes

00:55 this was an artifact of the tester for

00:57 some reason it imposed a 66 byte minimum

01:01 here's the test topology there's a

01:02 MACsec link between the PTX10008 running

01:06 Junos

01:06 Evolved and the one running SONiC we are

01:09 using Ixia to generate bi-directional

01:11 400G

01:12 line rate traffic before we get to the

01:14 demo we need to set some common

01:16 background

01:17 and nomenclature there are five

01:19 important statistics to look at

01:21 we defined them here first we have

01:23 efficiency

01:24 efficiency is the percentage of the 400G

01:27 bandwidth used to transfer Ethernet

01:28 frames but not

01:29 including MACsec octets the layer 1 bit

01:33 rate

01:33 is the amount of the 400G bandwidth

01:36 measured in gigabits per second used to

01:38 transfer the Ethernet

01:40 frames again not including the MACsec

01:42 octets

01:43 when calculating the layer 1 bit rate we

01:46 consider the Ethernet preamble

01:48 the Ethernet frame the inter-packet gap

01:51 but not the MACsec header the

01:53 transmission bit rate as seen in Junos

01:56 is called the Junos bit rate it is

01:58 different than the layer 1 bit rate

02:00 it measures the amount of bandwidth used

02:02 to transmit the ethernet frame

02:04 and the MACsec header unlike the layer 1

02:07 bit rate

02:08 it does not include the Ethernet

02:10 preamble nor

02:12 the inter-packet gap the frame rate is

02:14 just the number of ethernet frames

02:15 transmitted per second

02:17 and the frame loss percentage is the

02:19 percentage of ethernet frames lost

02:21 crossing the MACsec link during

02:23 transmission

02:24 if we are indeed sending and receiving a

02:26 400g line rate

02:28 then the frame loss should always be

02:30 zero percent

02:31 let's take a look at the MACsec frame

02:33 format as you can see

02:35 the original Ethernet frame is encrypted

02:37 into a secure payload

02:39 MACsec also adds a security tag and

02:42 an integrity check value not shown are

02:44 the Ethernet preamble and the inter-

02:46 packet gap

02:47 the preamble would precede the MACsec

02:49 frame the inter-packet gap would

02:51 follow it so let's start getting into

02:53 the numbers

02:54 the first thing that we need to

02:55 understand is the different components

02:56 that contribute to the bit rates that

02:58 we're going to look at

02:59 there's an Ethernet preamble that is

03:01 eight octets and an inter-packet gap

03:03 that is 12 octets

03:05 these 20 bytes represent physical layer

03:07 Ethernet overhead

03:08 in our case the MACsec overhead is going to

03:10 be 24 octets

03:11 because we are not using the secure

03:13 channel identifier

03:15 if we were to use the SCI the MACsec

03:17 overhead would increase to 32 bytes

03:20 as we said before we're going to be

03:21 generating ethernet frames of 66

03:24 1000 and 9000 octets the remaining

03:27 fields are just additive

03:28 combinations of the above information

03:30 for instance

03:31 L1 Ethernet is the Ethernet frame size

03:34 plus the physical layer overhead of 20

03:36 bytes

03:37 similarly the L2 MACsec size is just the

03:40 original Ethernet frame size plus the 24

03:43 bytes of MACsec overhead and so forth
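The additive combinations described here can be sketched in a few lines of Python (a minimal illustration; the constant names are mine, and the 24-byte figure assumes the SCI is omitted, as in this demo):

```python
# Per-frame overheads in octets, as described in the talk.
PREAMBLE = 8                   # Ethernet preamble
IPG = 12                       # inter-packet gap
PHY_OVERHEAD = PREAMBLE + IPG  # 20 octets of physical-layer overhead
MACSEC_OVERHEAD = 24           # SecTAG + ICV without the SCI (32 with it)

def l1_ethernet_size(frame_size: int) -> int:
    """Ethernet frame size plus the 20 octets of physical-layer overhead."""
    return frame_size + PHY_OVERHEAD

def l2_macsec_size(frame_size: int) -> int:
    """Original Ethernet frame size plus the 24 octets of MACsec overhead."""
    return frame_size + MACSEC_OVERHEAD

for size in (66, 1000, 9000):
    print(size, l1_ethernet_size(size), l2_macsec_size(size))
```

For a 66-byte frame this gives an L1 Ethernet size of 86 octets and an L2 MACsec size of 90 octets.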

03:46 for the mathematically inclined or those

03:48 who don't like magic numbers

03:50 these are the formulas that calculate

03:52 those key statistics

03:54 i'm not going to go over each one of

03:56 them but i encourage you to come back

03:58 and look at these formulas after you

03:59 watch the videos they calculate the

04:01 expected values

04:02 note also the Ixia field names of these

04:05 key statistics

04:06 we will be examining Tx L1 load

04:09 percentage

04:10 Tx L1 rate Tx frame rate the Junos bit

04:14 rate

04:14 is taken from the show interfaces

04:16 statistics command it is not part of

04:18 Ixia

04:19 for convenience i've used the previous

04:20 formulas to calculate the various

04:22 statistics for different frame sizes

04:24 notice how the efficiency increases from

04:27 78 to 97 to 99 percent

04:30 as the packet sizes grow not shockingly

04:33 so do the L1 and Junos bit rates the L1

04:36 bit rate increases from 312 gigabits per

04:39 second

04:40 to 398 gigabits per second the Junos

04:43 bit rate which is calculated differently

04:45 grows from 327 gigabits per second to

04:49 399 gigabits per second that's getting

04:51 pretty close

04:52 to our 400G it is important to note that

04:55 neither

04:56 the L1 bit rate nor the Junos bit rate

04:58 will actually hit 400G

05:00 because of the overhead of sending the MACsec

05:02 header or the preamble

05:03 or the inter-packet gap we can only get

05:06 close to 400G
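Those expected values can be reproduced directly from the definitions given earlier. A minimal Python sketch (variable names are mine, not Ixia's; assumes the 24-byte no-SCI MACsec overhead used in this demo):

```python
LINE_RATE = 400e9  # bits per second
PHY = 20           # preamble (8) + inter-packet gap (12), octets
MACSEC = 24        # SecTAG + ICV without the SCI, octets

def expected_stats(frame_size):
    """Expected statistics for one frame size at 400G line rate."""
    wire_bits = 8 * (frame_size + MACSEC + PHY)          # everything on the wire per frame
    frame_rate = LINE_RATE / wire_bits                   # frames per second
    l1_rate = frame_rate * 8 * (frame_size + PHY)        # frame + preamble + IPG, no MACsec octets
    junos_rate = frame_rate * 8 * (frame_size + MACSEC)  # frame + MACsec header only
    efficiency = 100 * l1_rate / LINE_RATE
    return efficiency, l1_rate, junos_rate, frame_rate

for size in (66, 1000, 9000):
    eff, l1, junos, fps = expected_stats(size)
    print(f"{size:>5} B: eff {eff:5.1f}%, L1 {l1 / 1e9:5.1f} G, "
          f"Junos {junos / 1e9:5.1f} G, {fps / 1e6:6.1f} Mpps")
```

For 66-byte frames this gives roughly 78 percent efficiency, a 312 Gb/s L1 bit rate, a 327 Gb/s Junos bit rate, and about 454 million frames per second, matching the figures quoted in the demo.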

05:07 conversely frame rate drops as the

05:10 packets get

05:10 larger this makes sense since a bigger

05:13 packet takes longer to transmit

05:15 due to factors like clock speeds

05:17 latencies when fetching statistics

05:19 and rounding errors observed values may

05:22 not perfectly match expected values

05:24 the observed values will be close to the

05:26 expected values but they may not be

05:28 exact

05:29 in all cases we utilize the entire 400g

05:32 bandwidth

05:33 it's important to realize this however

05:36 because we do not include all headers

05:38 when calculating bit rates neither the

05:40 l1 bit rate

05:42 nor the juno s bit rate will ever reach

05:45 400 g

05:46 finally efficiencies and bit rates

05:48 improve

05:49 as the frame sizes get bigger frame

05:51 rates however

05:52 decrease as the frame sizes get bigger

05:55 as we discussed previously

05:56 when we transmit jumbo frames we see that the

05:58 bit rates get close to 400G which is

06:01 expected

06:02 more of the bandwidth is being used to

06:04 transmit the original Ethernet

06:06 frame less of the bandwidth is dedicated

06:08 to protocol overhead

06:09 like sending the MACsec headers the

06:11 Ethernet preamble

06:12 and the inter-packet gap by far the most

06:15 important thing is to see

06:16 zero frame loss when sending at 400G line

06:19 rate

06:20 if we're sending at line rate we do not

06:22 expect any lost packets

06:25 okay we're finally ready for the demo

06:31 we're currently logged into the system

06:33 running Junos Evolved if we look at

06:35 et-0/0/0 we can see that's a 400G

06:38 Ethernet link we've also set the MTU to

06:41 jumbo frames

06:42 if we look at the MACsec configuration

06:44 we can see that we're using a 256-bit

06:46 cipher with extended packet numbering we

06:49 need

06:49 XPN for high-speed links like 400G

06:52 we can also see that et-0/0/0 is set up to

06:55 do

06:56 MACsec encryption

06:59 by looking at the MACsec connections we can

07:02 see

07:03 that encryption is indeed on and that we

07:05 are not including the secure channel

07:07 identifier

07:09 finally we can see that MKA the MACsec Key

07:11 Agreement

07:12 protocol is running on et-0/0/0
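The video shows this configuration on screen rather than in text. As a rough sketch only (hypothetical association name, placeholder keys; not the exact configuration from the demo), a Junos MACsec setup of this shape would look something like:

```
set security macsec connectivity-association CA-DEMO cipher-suite gcm-aes-xpn-256
set security macsec connectivity-association CA-DEMO security-mode static-cak
set security macsec connectivity-association CA-DEMO pre-shared-key ckn <hex-ckn>
set security macsec connectivity-association CA-DEMO pre-shared-key cak <hex-cak>
set security macsec interfaces et-0/0/0 connectivity-association CA-DEMO
```

Operational state can then be checked with commands like `show security macsec connections`, as Lester does here.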

07:20 flipping to the SONiC side we see that

07:21 Ethernet1 maps to et-0/0/0

07:25 it's a 400G link with jumbo frames

07:27 configured

07:29 Juniper is one of the first SONiC

07:31 vendors to ever support MACsec

07:33 for MKA we created a new MACsec Docker

07:36 container built around open-source

07:42 wpa_supplicant

07:45 when we look at the MACsec configuration

07:47 on the SONiC system

07:48 we can see that cipher suite number

07:51 three indicates

07:52 256-bit XPN

08:02 we also note that SCI is not included

08:05 here as well just as it wasn't on the

08:07 Evo side

08:10 if we execute the CLI command we can see

08:13 that these parameters have indeed taken

08:15 effect

08:17 and they do match the Junos Evolved

08:19 configuration

08:21 we can also see that MKA packets are

08:23 being exchanged with the Junos

08:25 side

08:32 we're now on the Ixia 400G tester I've

08:35 already set up

08:36 line rate traffic generation with a

08:39 fixed size

08:40 frame of 66 bytes

08:49 as you can see from the flashing icon

08:51 the frame loss is zero percent

08:55 the l1 bit rate is 312 gigabits per

08:58 second

09:01 the efficiency is 78 percent

09:05 and the frame rate is 454 million

09:08 packets per second

09:11 if we go to the Evo side and look at the

09:12 MACsec statistics we can see that the

09:14 number of encrypted frames jumps from

09:16 222 to 241 billion frames

09:33 we can also look at the Junos

09:35 bit rate which we can see

09:37 is 327 gigabits per second

09:42 the data for 66 byte frames was as

09:44 expected this is actually the hardest

09:46 test

09:46 because it has the greatest protocol

09:48 overhead let's try

09:50 a larger frame size 1000 bytes note that

09:53 we are still running at 400g

09:54 line rate and the frame size is 1000

09:57 bytes

10:00 we start the traffic

10:09 as expected the frame loss is still zero

10:12 percent

10:14 the l1 bit rate has now jumped to 390

10:16 gigabits per second

10:19 the efficiency has jumped to 97 percent

10:23 and the frame rate has dropped to 47

10:26 million packets per second

10:29 if we look at the juno s bit rate on the

10:31 evo side

10:32 we see that it has jumped from 327 to

10:36 392 gigabits per second

10:42 finally we change the Ethernet frame

10:44 size to 9000 bytes to test jumbo frames

10:50 as you can see we are still running at

10:51 400G line rate

10:54 and we're generating 9000-byte Ethernet

10:56 frames

10:58 starting the traffic we check on the

11:02 frame loss

11:05 and happily it's still at zero percent

11:07 loss

11:11 the l1 bit rate is now nearly 399

11:15 gigabits per second

11:18 the efficiency is now 99.7 percent

11:22 and the frame rate has dropped to 5

11:24 million frames per second

11:27 if we check the Junos bit rate on the

11:29 Evo we see that it is now

11:31 399 gigabits per second as well from

11:34 this screenshot we can see how the Junos

11:36 bit rate

11:37 increases as the packet size increases

11:40 this is also true

11:41 of the layer 1 bit rate if we track the

11:44 Ixia data

11:45 hopefully you notice the most important

11:46 piece of information at all three frame

11:48 sizes ranging from 66 bytes to 9000

11:51 bytes

11:52 the PTX10008 did not drop a single frame

11:55 when doing MACsec there you go the BT

11:58 ASIC is the high performance networking

12:00 engine that drives the PTX10008 modular

12:03 chassis

12:04 MACsec is integrated natively into the

12:06 ASIC unlike many competitors

12:09 Juniper MACsec does not utilize external

12:11 PHY devices

12:12 the BT ASIC can perform inline MACsec for

12:15 all packet sizes at 400G

12:18 line rate if MACsec is not needed BT can

12:20 turn off the MACsec block to reduce power

12:22 consumption

12:23 finally Juniper MACsec supports several

12:25 ciphers including

12:27 GCM-AES-XPN-128 and GCM-

12:30 AES-XPN-256 as you saw in this demo we

12:34 used the XPN-256

12:36 cipher say hello to Juniper's PTX10008 a

12:39 system that can meet

12:40 all your 400G high-speed networking

12:43 needs

12:44 the heart of the system is the BT ASIC the

12:46 PTX10008 can do 400G

12:48 line rate MACsec as you've seen you get

12:51 high speed link security without

12:53 traffic loss and finally you can pick

12:55 your NOS Junos Evolved or SONiC

12:57 the PTX10008 supports both operating

13:00 systems for maximum flexibility

13:02 here are some references for the PTX10008

13:04 and

13:06 its big brother the PTX10016 I refer

13:09 you to the product data sheet

13:10 we also have some OCP videos that we did

13:13 showing SONiC running on the PTX10008 I

13:16 think you'll like those too

13:18 please check out the PTX10008 and enjoy

13:21 the full power

13:22 of 400g stay healthy and stay safe take

13:25 care

13:26 bye
