Sterling Perrin, Senior Principal Analyst, Heavy Reading

Sustainable WAN transformation with 800GE Routing

400G & 800G WAN

Network operators face a dilemma — how to support rapid traffic increases while feeling the pinch of single-digit revenue growth in a saturated market. 400GE speeds have crossed the chasm and eased some of the pain, but 800GE is already necessary in many WAN transport domains. As operators evolve their networks to forward increasing traffic volumes, power consumption—and its associated costs and carbon emissions—can quickly spiral out of control.


You’ll learn

  • How to deliver performance and scale needed to meet the challenges of the AI era

  • How to reduce carbon footprint with multi-layer sustainable efficiencies (silicon, systems, software)

  • How to enhance 800GE operations for an exceptional experience

Who is this for?

Network Professionals, Business Leaders


Sterling Perrin
Senior Principal Analyst, Heavy Reading
Amit Bhardwaj
VP of Product Management, Automated WAN, Juniper
Dwayne McIntosh
Director of Product Marketing, Automated WAN, Juniper


0:02 Hello everyone, and thank you for attending today's webinar, Sustainable WAN Transformation with 800GE Routing,

0:09 sponsored by Juniper. Before we begin, I will cover a few housekeeping items. On

0:14 the left-hand side of your screen is the Q&A. If you have any questions during the webcast, please type your question into

0:20 the Q&A box and submit your questions to our speakers. All questions will be saved, so if we don't get to answer you, we may

0:27 follow up via email. At the bottom of your audience console are multiple application widgets you can use. If you

0:34 have any technical difficulties, please click on the yellow help widget, where you can find answers to common questions. A

0:41 copy of today's slide deck is available for download in the blue resources widget. At the end of today's

0:46 presentation, please take one minute to complete the survey that's open on your screen; your feedback is extremely

0:52 helpful. An on-demand version of the webcast will be available about one day after the event and can be accessed using the

0:58 same audience link that was sent to you earlier today. I would now like to turn the event over to Sterling Perrin, Senior

1:05 Principal Analyst, Heavy Reading. Sterling? Thank you, Barbara, and hello,

1:11 welcome, everybody, to Sustainable WAN Transformation with 800-Gig Routing, sponsored by Juniper. My name is

1:18 Sterling Perrin, and I will be the host and moderator for today's webinar, and I'm joined by two expert speakers

1:25 from Juniper to walk us through today's content. I'll introduce them here; of course, you'll hear

1:31 from them quite a bit as we go through. First, Amit Bhardwaj, VP of Product

1:37 Management with the Automated WAN group within Juniper. Hi Amit, welcome. Hello

1:43 everyone, thanks for joining today. Looking forward to your talk, and

1:48 Amit will be helped by Dwayne McIntosh, Director of Product Marketing, also in the Automated WAN group with

1:55 Juniper. Hi Dwayne, welcome. Hi, good morning, thank you very much for having us.

2:00 Yep, looking forward to this. For everybody in the audience, this is the flow

2:05 of the agenda: market trends and drivers, which Dwayne and I will cover,

2:10 and then we'll move over to Amit for most of the remaining sections: silicon, systems requirements, converged

2:17 optical routing (looking at kind of the renaissance of IP over DWDM), and, on the software side, network

2:26 automation and programmability. We'll have some conclusions, and then we should

2:31 hopefully have about 10 minutes for Q&A at the end, so please do ask questions as we go through. We'll gather

2:38 them up and hit as many as we can live before we close

2:43 out. So let me start and set the context: one, two, three, four; four slides in. So,

2:51 AI. We need to talk about AI, clearly a huge driver for a lot of

2:57 what's going on in the communications industry right now. So the AI

3:02 revolution was seemingly launched with OpenAI's ChatGPT, which kind of took

3:10 over the world in 2022, but it really did begin a couple of years prior to that with the introduction of

3:16 Transformer models, which is what ChatGPT and many others are based on. GPT is

3:22 Generative Pre-trained Transformer, so the T in any of these models that have a T in the

3:29 name is Transformer. They enable the natural language processing that we're

3:34 seeing, and all of the explosive growth that we're seeing in AI. From an industry

3:42 analyst perspective looking at communications, the ins and outs of how these models work

3:48 are well beyond my expertise, so we're looking at how these models affect the network, and it really starts

3:55 with the computational demands that these new models are placing on compute,

4:04 and that's what the left-hand chart shows. This comes from Nvidia;

4:10 Jensen Huang just presented an updated version of this at their conference in March, but this

4:17 pretty much shows that trend. If you look, that left line was kind of the

4:22 trajectory AI was on, how much training compute was required, and it was

4:28 growing at about a 45-degree angle there, so it's pretty steep growth. Training compute

4:35 demands in this case were increasing 8x every two years, so that's pretty significant. But the real explosion was,

4:42 again, these Transformer models, including ChatGPT, which is that purple line, which is growing almost

4:50 vertically, and that is really the AI evolution that we're talking about right

4:56 now: the requirements increased 275x in two years' time. So, just incredible

5:03 demands on the network. So what does that mean? It's driving massive compute, and Jensen Huang talked about

5:09 this in his keynote: XPU units, whether they're GPUs, TPUs (the

5:16 tensor processing units), or CPUs. Massive compute, and then the

5:21 interconnect for all of that. And the interconnect includes, within the chip itself,

5:29 chip-to-chip connectivity; connectivity within the rack; connecting racks; and then connectivity in the data center that's

5:36 connecting these AI clusters together. We're seeing massive growth and interest: hyperscalers ramping up

5:44 their connectivity between the racks and the clusters, moving to 800 gig in 2024,

5:50 immediate demand for that. And then at conferences around the world we're

5:55 certainly seeing a lot of interest and R&D going towards a path to 1.6T

6:01 and even 3.2T. So there cannot be enough connectivity between these

6:08 systems and networks. Within the data center itself, connectivity between

6:14 the racks and clusters is largely optics-based, and there's a

6:19 very strong focus here on the lowest power consumption within the data center. So for folks that were at OFC, all of

6:27 the talk about linear-drive pluggable optics (LPO) and co-packaged optics (CPO) is aimed at driving

6:35 capacity and reducing power as much as possible. That's all within the data center

6:40 itself. So what happens in the wide area network, which is really the focus of what we're going to

6:46 talk about today? We know that traffic within the data center drives traffic between the data centers. This is the

6:53 data center interconnect opportunity; it's been growing very strongly for many years, and we know that when we

7:00 move to 800 gig for data center interconnect, that 800-gig

7:06 DCI is going to be the number one driver for moving to 800 gig in the

7:11 wide area network. The chart on the right here comes from Omdia. Also, I should

7:17 have said that the chart on the right on the last slide is also from my colleagues at Omdia, who are graciously

7:22 letting me share some of their data from forecasts. So this particular one

7:28 here shows high-speed coherent pluggable shipments through 2028, so

7:34 this is share of coherent pluggables. And so the difference within

7:39 the data center versus between the data centers is the move to coherent pluggable optics for the distance: these are many,

7:45 many kilometers, whereas on the data center side we're talking about tens of meters, hundreds of meters, etc.

7:52 So while 400 gig in data center interconnect is going to be the

7:58 primary interface, and is currently what operators are standardizing on, 800 gig

8:04 is going to grow sharply, and so as a share of total high-speed units we're going to see a sharp increase of 800-gig

8:10 share through 2028. Again, 400 gig is going to continue to grow in terms of its units, but 800 gig shows a strong

8:19 uptake even through 2028. DCI is, as I said, the number one

8:24 driver; it's the big one, but it's not just about DCI. We're also going to see 800

8:29 gig adopted in long-haul and core networks. These are also standardizing on 400 gig today, but operators in our

8:37 surveys, in Heavy Reading surveys, tell us that they want and need a path to 800 gig in the future. One of the aspects of

8:45 the coherent pluggables coming out that support 800 gig that may be overlooked is that they can also run

8:52 lower rates other than 800 gig, particularly 400 gig, and for these long-

8:59 haul types of applications there's an interest in dialing down the modulation: moving to a lower-order

9:05 modulation over these newest-generation pluggables, running them at 400 gig, to get you basically 400 gig anywhere in any

9:12 terrestrial network in the world, and the ability to even do a lot of subsea applications. The other

9:20 application or use case I'll call out (Amit will talk in detail about all of these) is router-to-router peering

9:27 within the data center. So this is a WAN application, but the actual

9:33 implementation of this is connecting routers within the data center to peer, basically ISP to ISP, and we're going to

9:40 see a migration to 800 gig for that as well. So those will be routers, which

9:45 we'll talk a lot about today, with an 800-gig interface on them. Beyond scale: so scale in

9:54 capacity is critical, as we've just talked about, but there are a number of

9:59 other priorities as operators look at their systems and at a network basis. Reducing power consumption

10:06 is critical: the lowest watts per bit is increasingly a crucial metric across

10:12 all service provider types. There's a very acute issue within the data center that's driving the LPO and CPO

10:20 deployments, but even in the wide area, with coherent optics being used, there's a strong focus on reducing power

10:28 within the optics and within those systems. It relates to sustainability, which has been a big topic, and it also is key

10:34 for profitability, because power is a huge component of overall opex within an

10:40 operator. And as we look at telecom specifically, we're kind of in

10:45 a tough space between the early adoption and rollouts of 5G and then the coming 6G, where there's going to be a

10:52 very strong focus on reducing costs for the network. So, a huge driver, which gets me to the next points on capex and

10:59 opex: a very strong focus. Any architectures that help drive lower opex

11:04 and capex are going to be very well received and needed by telecom operators specifically over the next couple of

11:11 years. IP over DWDM is an architecture that does exactly that. This is the

11:16 integration of optics on routers. It started with 400 gig for a number of

11:22 reasons we've talked about on past webinars, but it cannot only be relegated to 400 gig; operators need to know that

11:29 IP over DWDM is also going to be available at 800 gig, as well as beyond.

11:35 The chart on the right comes from a Heavy Reading survey we did last year of network operators globally,

11:41 primarily telecom operators in this case, and I put it here just to show that operators really want to move to this

11:48 type of architecture. The top one here: over the next three years, 65% of

11:53 these operators say they want to move to coherent pluggable optics on switches and routers. That's IP over DWDM, and it's

12:00 actually stronger adoption than the coherent pluggables they're looking at for any other type of network

12:05 element. So, a very real trend. One operator at OFC just recently told me,

12:11 a very large operator, that IP over DWDM is inevitable for their network. Other points I'll make, and

12:18 then we'll bring in Dwayne to kind of expand on some of the key trends, but

12:24 I do want to talk about automation. Again, we're talking at the network level, so on the software side, machine

12:29 learning, as well as AI itself, is going to help network operators reduce their opex specifically, so a lot

12:37 of interest in that coming through in our surveys, as well as security and reliability. Not just important, but as

12:43 we survey operators one year to the next, the requirements for security and

12:48 reliability are actually becoming higher from year to year. So there are a lot of asks and requirements: capacity, but in a

12:55 way that's done to reduce capex and opex and increase security and reliability. It's

13:01 quite a set of requirements, and folks like Juniper are building the networks that are going to make that happen.

13:07 So with that, actually, before we bring in Dwayne, let me go

13:12 to our first poll question, and then I'll bring in Dwayne to expand on some of the trends. So, we

13:18 talked about adoption of 800 gig in the wide area; it's brand new.

13:26 This stuff will start to ship this year. Curious for you as the audience what your plans are,

13:33 if you're a network operator, or what your customers' plans are if you are selling into service providers, and

13:40 we've got a number of options here. Just pick the one that fits best, and

13:46 I'll push them out once we get to a critical mass. Deploying this year, as soon as available; deployments

13:53 planned next year, 2025, kind of what we would see in our forecast;

13:59 initial ramps over the next two to three years; or are you in a position where

14:04 you're evaluating but have no plans currently; or,

14:10 if 800 gig is kind of pie in the sky for you, not under consideration

14:16 at all. So those are the options. Let me just wait a second

14:23 here, and let me push, and then I may update, but let me give it this push here,

14:30 and, Dwayne, let me bring you in to comment on what we're seeing.

14:38 Sure, yeah. I think this makes a lot of sense. We certainly

14:45 are seeing customers today who have 800-gig requirements: core peering, data center

14:52 interconnect, certainly, data center implementations where our routers are

14:59 being used in Clos architectures for spine and leaf roles in the data center for AI. And I think, for the

15:08 majority of accounts, they would at least say they want to future-proof

15:14 their acquisitions, their purchases, to support 800 gig, because they know

15:20 it's an eventuality. So I think we certainly see about a third of the

15:25 accounts deploying today in whatever capacity, planning next year and evaluating 800-gig technology as well,

15:33 and then those that are planning to migrate within the next two to three years, or making sure that their purchases

15:38 will support that transition. Makes sense? Yeah, absolutely; future-proof is definitely a word we're hearing a lot.

15:45 Let me push this along, and I'll let you go from here, Dwayne. I'll

15:52 just make one comment, Dwayne, before you start: I think the survey is exactly what we are hearing from the customers in many

15:58 one-on-one conversations, especially as it gets to some of the hyperscale use cases, like core, DCI, even

16:07 aggregation in some cases. So with that, Dwayne, over to you. Very encouraging

16:12 numbers from that survey. All right, so, to reinforce that:

16:18 there are a number of diverse factors that are influencing WAN transformations towards 800 gig. Of

16:26 course, we all know that year-over-year growth for internet traffic is huge. That's dominated by

16:33 video content, with higher- and higher-definition content

16:38 dominating, like, 80% of the traffic generally today. Managed services are even

16:44 rising: with SD-WAN and SASE, managed service offerings are growing

16:51 substantially, at a 32% CAGR. And then we see huge market growth in data

16:57 center interconnect. And then, notwithstanding all of that, of course, is the AI traffic explosion. So,

17:05 for AI data center deployments, for back-end systems as well as front-end

17:11 inference deployments, there's huge growth in model development, where we see

17:18 4x related traffic growth, like, every two years. So across the WAN,

17:25 as well as within the data center, we see 800-gig requirements that are very strong today, for implementations

17:33 today or for future-proofing networks to support it in the next year or

17:38 two years as they migrate. So it's really crucial to

17:45 have 800-gig capabilities to alleviate these network pressure points.

17:53 Traditionally, we talk about core peering as being some of the first areas

17:59 to increase throughput efficiency requirements. We talk about data

18:04 center interconnect becoming, again, a dominant use case where 800 gig is

18:11 required. But as you can see here, we're hearing requirements, or future-

18:16 proofing requirements, for 800 gig in access and

18:22 aggregation, like in the cloud metro, and in edge networks for multiservice edge

18:27 functionality, as well, of course, as in the data center fabric, where the spine-leaf Clos architectures

18:34 that are implemented are getting higher and higher demands from the higher-processing GPUs, from Nvidia

18:41 and the like. So, at the end of the day, there are really three main

18:47 pillars of requirements that we see in these environments when you support 800 gig. Clearly performance:

18:54 adapting to 800-gig bandwidth expansion as required. So,

19:00 accounts don't do this overnight; it's incremental, so they have to support

19:05 their legacy 100-gig and 400-gig interfaces and continue to grow, so it can be done

19:12 organically as the demand occurs. As you mentioned, sustainability

19:17 is really key: sustainable efficiencies from your silicon, from your

19:23 systems, from how you do automation and engineer your network on the fly with,

19:28 say, SDN. It's critical that power efficiency, space efficiency, and your carbon

19:34 footprint can be lowered. And then automation: so, security, and operating

19:41 your environment efficiently at 800-gig speeds, is no

19:48 small endeavor. So as we implement MACsec encryption, as we automate these

19:54 environments, we want to make sure that the functionality integrates

19:59 easily into your existing operations environment, and we can enhance that with service

20:06 visibility. So our blueprint for this WAN transformation then kind of

20:12 falls into these three pillars. From an automation perspective,

20:17 Juniper has Paragon Automation. We have capabilities to look at the service

20:23 level, so we can see what the service experience is from the customer's side:

20:28 we focus on their experience and whether they are getting the service levels that are required. Open management APIs and

20:36 protocols are critical, and these are driven by open, outcomes-based

20:43 models. So whether it's NETCONF/YANG or telemetry, we want to be

20:50 able to fit into these environments in a standard, open way.
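As a concrete illustration of the open-management point, here is a minimal sketch of building a NETCONF `<edit-config>` payload with the Python standard library. The RPC framing follows RFC 6241, but the `interfaces` subtree, the interface name `et-0/0/0`, and the `speed` leaf are hypothetical placeholders, not any specific vendor's or standard YANG model.

```python
# Minimal sketch of a NETCONF <edit-config> RPC body (RFC 6241 framing).
# The config subtree below is an illustrative placeholder, not a real model.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}candidate")      # edit the candidate datastore
config = ET.SubElement(edit, f"{{{NC}}}config")

# Hypothetical config subtree: set an interface's speed to 800G.
ifs = ET.SubElement(config, "interfaces")
intf = ET.SubElement(ifs, "interface")
ET.SubElement(intf, "name").text = "et-0/0/0"
ET.SubElement(intf, "speed").text = "800g"

payload = ET.tostring(rpc, encoding="unicode")
print(payload)
```

In practice a NETCONF client library would wrap this payload in a session and commit it; the point here is only that the management interface is plain, schema-driven XML that any open tooling can generate.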

20:56 From a performance perspective, of course, support for all the use cases we just discussed as they evolve, in fixed

21:02 and modular form factors. Our Express 5 ASIC, again, is a top performer;

21:10 it's 5-nanometer technology, which we'll get into here in a second, but that really enables us to support 800 gig and beyond and to

21:18 provide the efficiency that we're looking for in these environments, with coherent or non-coherent optics. And then

21:26 the last point here is on sustainability. We really look at sustainability from a multilayer efficiency perspective.

21:33 There's sustainability you gain within the silicon that you develop; we have custom silicon with Express 5. Our

21:41 systems are energy-efficient chassis, designed to be as energy efficient as possible. And then,

21:48 operationally, we support green traffic-engineering functions, like with SDN, to

21:53 lower our costs overall. So this is basically an overview of our

21:59 blueprint. I'm going to hand it over to Amit to drill down a little further

22:04 on this. Thanks. Oops, I clicked one more. Thank you, Dwayne and Sterling, for talking about

22:11 the big trends. I'm going to go the other way: I'm going to start with the

22:16 basics, and then we'll talk about the more complex things. We'll go to silicon,

22:21 we'll go to systems, and we'll talk about security and automation. So let's go to the fundamental thing. Many of you

22:28 said you are going to do 800 gig very soon, but there's one thing which is

22:35 really, really important about 800 gig. I'll say 800GE first, and then I'll come to 800-gig coherent; 800 gig

22:42 itself brings a lot of new things to the table. Let's start with 800GE. What does 800GE bring to the table? For

22:47 sure it brings an 800-gig port, so that means instead of doing 2x 400GE you can do a single port of 800 gig and reduce

22:55 the amount of link aggregation and get more efficiency in the network. But 800 gig, like Sterling was also mentioning

23:01 earlier, brings IP over DWDM to the table. I remember when 400-gig IP over

23:07 DWDM started to get mainstream, many customers asked: will it happen in the 800-gig

23:13 generation, will it happen in the 1.6T generation? The answer is yes, it's absolutely happening at 800 gig, and we

23:20 believe it will happen at 1.6T and 3.2T also. So IP over DWDM is one of the

23:25 most sustainable trends that will continue. Then, AI/ML is driving

23:31 adoption of 800 gig in the data centers: the NICs from the

23:38 AI clusters, and all the architectures behind that, are going to get optimized with 800 gig over the next three to

23:44 four years, which means these clusters can't just sit in one single data center; they have to span across data centers,

23:50 and that'll drive the demand for 800 gig in the DCI to go across the data centers also. And over time that goes

23:58 into the core network, into the peering networks, and it percolates everywhere from a technology

24:05 standpoint. The second thing: 800 gig actually brings a lot

24:11 more when we talk about 400 gig, because, number one, it doubles the 400-gig density. Every 800-gig port can

24:18 be used as a 2x 400 gig, and this 2x 400 gig is not just a breakout; this 2x 400

24:25 gig in this case is using dual LC connectors, so you can get two literal 400-gig ports from every 800-

24:32 gig slot in there. We talk about IP over DWDM again: as we go to 400 gig with

24:39 these same ZR optics, you can do long-haul and ultra-long-haul use cases at

24:44 50% of the power per unit of capacity, so much better power

24:49 efficiency. So, lots to bring to the table for 400 gig, and similarly on the 100 gig:

24:55 it doubles the density, but at a much better power footprint. So, a lot of things that 800GE brings to the

25:01 table, and we believe that the customers will use these for all these different speeds and feeds. And we

25:08 showcased this at OFC this year, where we had the PTX10002 with 100-gig, 400-gig, 2x

25:15 400-gig, 800-gig, and 800G ZR ports all running at the same time

25:21 in the same router. So this is real. The next thing I want to talk about

25:27 from the ports is the silicon. Before I go to silicon, a brief commentary on why silicon is important.

25:34 There was a time, I would say before COVID, when people started to talk about

25:39 "good enough." But really what happened was that good enough was pretty poor, because people don't think about what good

25:46 enough means; good enough means you just do the basic stuff. The reality is the silicon is becoming more

25:52 and more important, not just in networking but across the board. Look across the board: Apple's doing

25:57 their own silicon, and all the AI discussions are driven through silicon first. Networking

26:04 is the same: we believe silicon plays a huge part here in optimizing the

26:10 networks for today and for the future. And when we go to the 800-gig generation, the first question is, okay,

26:17 what should the silicon look like? When we talk about silicon, the first thing we talk about is the throughput:

26:22 what is the right radix for the silicon? If you're doing 800-gig ports,

26:27 the magical number tends to be: can I do 36 ports on the chassis line card, can I

26:33 do 36 ports on the fixed modular form factor? You multiply 36 by 800, and the

26:38 magical number is 28.8-terabit silicon. And using some of the chiplet

26:44 technology here, like you see in this picture, there are two chiplets connected with short-reach SerDes, and they basically

26:50 behave like a single piece of silicon; exactly what you heard from some of the AI/ML presentations as well,

26:57 cutting-edge technology from a silicon perspective. And we can take these two chiplets from a fixed form

27:03 factor, for example, into the chassis line cards: we can put these two chiplets onto the line card and do a 28.8-

27:09 terabit line card. So that really drives the power efficiency of the systems from that perspective.
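The port arithmetic quoted above can be sketched in a few lines; a minimal illustration using only the numbers from the talk:

```python
# Line-card throughput arithmetic from the talk: 36 faceplate ports
# at 800 Gb/s each gives the "magical" 28.8 Tb/s of silicon capacity.
PORTS = 36
PORT_SPEED_GBPS = 800

total_tbps = PORTS * PORT_SPEED_GBPS / 1000
print(f"{PORTS} x {PORT_SPEED_GBPS}G = {total_tbps} Tb/s")  # 28.8 Tb/s

# Each 800G slot can also be broken out without stranding capacity:
breakouts = {"1x800G": 1 * 800, "2x400G": 2 * 400, "8x100G": 8 * 100}
assert all(gbps == PORT_SPEED_GBPS for gbps in breakouts.values())
```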

27:16 And second, no limitations. No limitations means that when you're doing 28.8, every 800-gig port

27:23 should be capable of doing 8x 100, 2x 400, and 1x 800. So that's

27:29 important from a silicon perspective. And not just do that, but do that in a more secure way, which means

27:35 MACsec at line rate for all speeds and feeds and all ports. And once you do that,

27:40 you can start to enable some of the hyperscale use cases. So we talk about hyperscale use cases, like all

27:47 the use cases mentioned there: core peering, DCI, DC, aggregation, AI networks. These use cases will continue to drive

27:54 the growth in the bandwidth for many, many years. But what is also critical to

27:59 address these use cases is some of the capabilities the silicon needs to have. For example, from a scale

28:07 perspective: can I support the FIB for v4/v6 scale, not just for today but for the

28:13 next 10 years? Because in the peering networks the v4/v6 routes just continue to grow, so the state of the art now on this

28:20 silicon is about 20 million-plus FIB for the core use case. Can I do the number of tunnels that's required in the core;

28:26 can I do 100K-plus tunnels, for example? So that's important from the silicon capability perspective.
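A rough back-of-envelope for the FIB figure mentioned above; the 20-million-route number is from the talk, while the bytes-per-entry value is purely an assumption (real hardware FIBs use compressed trie or TCAM structures, so actual memory use differs):

```python
# Back-of-envelope FIB sizing. FIB_ENTRIES is the 20M+ v4/v6 figure
# quoted in the talk; BYTES_PER_ENTRY is an assumed flat-entry size
# (hardware FIBs compress far better than this naive estimate).
FIB_ENTRIES = 20_000_000
BYTES_PER_ENTRY = 32  # assumption: prefix + next-hop index + flags

fib_gb = FIB_ENTRIES * BYTES_PER_ENTRY / 1e9
print(f"~{fib_gb:.2f} GB of raw FIB state for {FIB_ENTRIES:,} routes")
```

Even under this naive model the table must grow with the internet routing table for a decade, which is why headroom in on-chip and fungible memory matters.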

28:33 Then we start to go into some of the other hyperscale use cases, like aggregation, where multicast is

28:41 important, so protocols like BIER will simplify multicast. Hierarchical

28:46 QoS is becoming important, not just in aggregation; we're also starting to see it in the DCI use cases, where

28:53 customers want more than 8 queues when it comes to DCI applications.
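The idea behind hierarchical QoS can be sketched as a toy two-level bandwidth split: first across customers, then across each customer's own queues. The customer names, queue names, and weights here are illustrative only, not from any product:

```python
# Toy two-level (hierarchical) scheduler: the port's bandwidth is split
# across customers first, then each customer's share is split across
# its own queues -- the core idea of H-QoS versus a flat 8-queue model.
def split(total_gbps, weights):
    total_weight = sum(weights.values())
    return {name: total_gbps * w / total_weight for name, w in weights.items()}

port_gbps = 800
per_customer = split(port_gbps, {"cust_a": 3, "cust_b": 1})
per_queue = {cust: split(bw, {"voice": 1, "video": 2, "best_effort": 5})
             for cust, bw in per_customer.items()}
print(per_queue["cust_a"])  # cust_a's 600G split into 75/150/375
```

With a flat 8-queue model every tenant shares the same eight classes; the hierarchy lets each tenant carry its own class structure inside its guaranteed share.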

28:59 Visibility into the networks: inline telemetry. The memory can be

29:04 fungible, so you can do scale for different applications depending on what the resource requirements are.

29:11 Security for things like peering. And scale in general:

29:17 it's not just about driving the scale of filters, of ACLs, of sampling rates, but also doing that

29:23 without a performance hit. We've done, at times in the past, some

29:28 benchmarks against others and realized that it's not easy doing scale, and scale

29:35 without a performance hit is something we pride ourselves on at Juniper. I think this is an important part to look at when you look

29:40 at the silicon that you want to build into your networks, or deploy in your networks, today and in the

29:46 future. So that brings me to the systems. You get your silicon, you try

29:54 to understand what the silicon can do, and then we start to build the systems; in this case it's about

30:00 the fixed form factors and the chassis-based systems. You drive

30:06 industry-leading throughput efficiency, because the state of the art now, because of the faceplate, is about 36 ports,

30:12 both in the fixed form factor as well as the chassis-based systems. Having that

30:19 single chip really drives the efficiency from a power and space perspective. And no limitations across

30:26 the board, not just for the client optics but also for the coherent optics, so every port is capable of doing 800-gig IP over

30:35 DWDM. As we go to 800 gig, there is also 400 gig, which is deployed today and

30:42 still being deployed, and 400 gig will be deployed for the next 10 years; it's never going to go away. So what's important is

30:49 that there is backwards compatibility. For example, if you're putting an 800-gig line card in a chassis, it

30:55 should be backwards compatible with the current 400-gig line card; it should be backwards compatible

31:00 without changing the fabric, and work with the new fabrics, so you get full flexibility in how you introduce 800 gig into

31:07 your network. And finally, with these big systems, it's not just the silicon; it's also the control plane. So

31:13 the combination of the silicon and the control plane drives the scale: the scale for the routes, for the convergence,

31:19 for the sampling, and all the things we talked about on the previous slide. So now let me take a step back, or

31:28 rather a step higher, and go to the use cases. Now we start to look at the

31:34 hyperscale use cases, and in these hyperscale use cases there are certain things that are important. For example,

31:42 in the core, it's important you have scale: scale from a BGP perspective,

31:48 scale from a tunnel perspective, the scale that makes your core network fully secure. Take another example,

31:55 peering: here we're talking about v4/v6 scale; we're talking about the fact that

32:01 you can do DDoS filtering at scale without a performance hit. We

32:07 go to AI networking: here the large-radix devices have started to become

32:12 really important, because you can take the large-radix chassis devices in

32:17 the spine role in these AI/ML clusters and reduce the five-stage Clos to a

32:22 three-stage Clos, and the value of that is that you start to eliminate a lot of optics

32:29 as you go from the five-stage to the three-stage, and it also drives a better power footprint, because

32:35 now you're using far fewer devices in the network.
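The optics savings described here follow directly from the path length through a folded Clos fabric. A minimal sketch: the stage counts are from the talk, and the per-link optics count is the usual two transceivers per link:

```python
# Every flow through a folded Clos crosses (stages - 1) inter-switch
# links, and each link burns a transceiver at both ends. Collapsing a
# five-stage Clos to a three-stage Clos with bigger-radix spines
# therefore halves the inter-switch optics each path consumes.
def optics_per_path(stages):
    inter_switch_hops = stages - 1  # e.g. leaf -> spine -> leaf = 2 hops
    return inter_switch_hops * 2    # two transceivers per link

print(optics_per_path(5), optics_per_path(3))  # 8 vs 4 optics per path
```

Fewer hops also means fewer switches powered and cooled along each path, which is where the power-footprint improvement comes from.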

32:42 on and on with the other use cases like DCI, DC edge and WAN, and Metro aggregation

32:47 where VPNs start to become important because you have multiple customers in this scenario and isolating those

32:54 customers in a multi-tenancy situation in the data center but same thing in the Metro aggregation with L2 L3 VPNs uh

33:01 EVPN and advanced QoS capabilities with hierarchical QoS so when you get a silicon that

33:09 drives all these use cases for the hyperscalers I believe this is really really uh useful for the customers

33:16 because you could take one tool uh and use it in many different use cases uh because that's less certification less

33:23 qualification and also uh driving you know advantages from a sparing

33:30 perspective so with that Sterling let me pass it to you for one more survey

33:35 question yes thank you so this one we wanted to get into

33:42 the use cases so Amit great job going through the different use cases that are kind of on the table as operators move

33:47 to 800 gig so we want to hear from you uh the audience on which use cases

33:55 are the top requirements for 800 and in this one it's actually uh choose

34:01 all that applies so you can select hopefully you can select multiple um and he's kind of walked through

34:07 them all but core um peering data center interconnect

34:12 or uh data center edge the things around connecting data centers core and edge

34:17 um Metro all the data center interconnect options there Metro aggregation um or uh AI which is the

34:25 intra data center um application and let

34:31 me give it a moment

34:36 maybe Amit I'll put you on the spot here as we wait them out um what would your

34:43 top two be what do you think are the top two that are going to come in and then I'll push out and see how it goes

34:51 top two are going to come in as core and DCI and three AI networking clusters all

35:00 right so you gave three let me push them here we're about 40% of the audience give it a shot core DCI so yeah

35:09 so DCI in all its forms top by a fair amount then core and then for this

35:15 audience the AI uh a bit less although this is a WAN webinar so take that uh

35:22 but yeah any other thoughts or comments on this uh very interesting to see the Metro

35:27 aggregation numbers uh that's like almost a third of people think Metro aggregation would be one of the

35:33 interesting use cases and I agree because as we look at the densification uh of metros with 5G um yes this and

35:42 5G becoming more virtual this starts to make sense because now you're not just doing

35:48 aggregation it's basically a data center sitting in the Metro yeah good point Amit I mean

35:53 I kind of glossed over Metro aggregation myself I was a bit you know thinking that was a bit longer term but

35:58 maybe these results are saying I should uh look at that a bit more seriously in the future so yeah always good stuff

36:04 from these polls let me uh push to the next one in the interest of time and I believe you're still up right yes

36:11 I'm still up thank you okay so like we said uh as we go to the 800 gig

36:18 generation and we did some numbers on our side what is the value going to be

36:23 from a sustainability standpoint and all these numbers are well vetted

36:30 by Juniper with our products um for example clearly I mean it's simple

36:35 you double the density in the same space you get 50% space efficiency with

36:40 800 gig solutions um for the silicon uh we're going to see at least 49% improvement

36:47 in watts per gig but silicon is only one part of this because as you get this efficiency on the silicon the other

36:54 things that take power uh you know like the fans and all that stuff and those fans start to become more efficient as

37:01 the silicon power goes down so we're starting to see for example on a PTX 10002

37:06 uh driving 75% power efficiency as compared to the previous generation and we'll start to see the

37:13 same thing on the modular form factors also we don't have the numbers for modular form factors yet uh we're still

37:19 measuring them but just from an ASIC perspective we know it drives 77%

37:24 efficiency when you take all the forwarding silicon as well as the silicon that goes into the fabric

37:31 but that's not all because we have these

37:37 platforms which are capable of doing a lot but we can also turn off a lot of stuff which is not being used and we can

37:44 also optimize the network based on how the network is being used and drive even more savings with

37:52 capacity optimization with Juniper Paragon driving 27% on top of everything that's on the left
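Those percentages compound multiplicatively rather than additively; here is a quick back-of-the-envelope sketch using the 49% and 27% figures from the talk, and assuming for illustration that they apply to the same power baseline:

```python
# Back-of-the-envelope compounding of the power-saving figures quoted
# above (illustrative only -- real baselines differ per deployment).

def remaining_power(*savings: float) -> float:
    """Fraction of baseline power left after applying each saving in turn."""
    frac = 1.0
    for s in savings:
        frac *= (1.0 - s)
    return frac

# 49% silicon watts-per-gig improvement, then 27% more from
# Paragon capacity optimization on top of it.
frac = remaining_power(0.49, 0.27)
print(f"power remaining: {frac:.1%}")      # ~37% of baseline
print(f"total reduction: {1 - frac:.1%}")  # ~63%
```

So roughly 63% of the baseline power goes away overall, not 49 + 27 = 76%.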

37:58 so I do believe we are moving to a world where things start to get more

38:03 sustainable and we're going to use a lot of automation uh tooling to make it even

38:09 better um one of the technologies my favorites I know it's one of Sterling's favorites too is IP over DWDM you see

38:18 different naming for this we at Juniper call this Converged Optical Routing

38:23 Architecture what this brings to the table as compared to the previous generation is that it used to be and still

38:28 is that there is routing and there are transponders right these are two different uh things like ships in the

38:35 night and we've been trying to do multi-layer for so long but it's really complex to do when you have IP layer

38:43 which is actually quite uh you know standards based but you have the optical layer which is actually quite

38:48 proprietary implementations from all the optical vendors as we go to IP over DWDM

38:53 where every routing port is capable of supporting DWDM this wasn't the case in the past but it's definitely the case in

38:59 the 400 gig generation it's the case in 800 gig and will be the same in 1.6T and the future this is where everything is

39:06 heading from an IP over DWDM perspective so as we get there you know we start to

39:11 merge DWDM onto the routers we eliminate one control plane completely we get so

39:17 much you know Simplicity now in the network we also start to make things more efficient because it used to be

39:23 that you had uh extra capacity being provisioned on the optical side extra capacity on the routing side now we can

39:31 just do the planning at a single layer and really really optimize things across the network and some of the numbers at

39:36 the bottom that you can see we did these numbers uh taking out a transponder

39:42 and putting a DWDM optic on the router we start to see 54% power

39:49 savings 77% space savings and 55% carbon footprint savings I think the value

39:54 proposition is pretty significant and let me just demystify this in a very

39:59 very quick way like what does 800 gig IP over DWDM bring to the table a lot a

40:05 lot right now because literally you can use these 800 gig ZR optics and build any terrestrial network without using any

40:12 transponder so there are some reaches I mentioned here like 800 gig for Metro applications going up to 500

40:21 kilometers uh then you can start trading speed for distance you want to go longer like long haul around

40:28 1,000 kilometers you can go to 600 gig uh with the same optic and you can go to ultra long haul applications at 400 gig

40:36 exactly with the same optic so now you can use the same optic same routing platforms and deploy any terrestrial

40:44 network with IP over DWDM in the 800

40:50 generation and that brings me to take a step back let's just treat the whole router

40:57 as a black box what does that mean and the question really is when you have a black box how do you automate that black

41:03 box my favorite example sometimes is Tesla which is you know we have a car now which streams telemetry and

41:10 everything can be programmed on the car routing is basically the same everything can be programmed right

41:15 because we build that stuff into the hardware for programming and everything

41:22 can be programmed and all the data can be streamed out from the routing platforms uh so this is

41:27 the commitment we have from a standardization perspective it's good to have standardized models uh and

41:34 this is why we continue to invest in OpenConfig but we also know that the standard models are getting better but

41:42 they're not complete yet so we have native data models and standards-based models uh to drive uh

41:50 the programmability of the platforms and then we look at it from a telemetry

41:55 perspective uh we get to use some of the standard uh you know services uh gRPC services so all the telemetry can be

42:02 ingested by you know Paragon Automation or could be open-source collectors or you know customers' own

42:09 collectors for collecting the data from the network and we just launched uh

42:15 things like Telemetry Explorer and a YANG data model explorer where as the code is written uh for providing

42:24 the telemetry and the YANG models it can be exported to an online tool

42:31 where customers the programmers can look at this tool and see what is available and in what release from a telemetry and

42:38 YANG models perspective you know now if you have programmers who are trying to automate the networks that's the tool to

42:46 go to because you really don't need to know what routing is for the programmers it's really I need to know how to

42:52 program it I need to know what data to ingest to run my AI models and closed-loop automation
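A minimal sketch of that closed-loop idea — ingest telemetry records, decide, act. The record fields, device names, and the 80% threshold are invented for illustration; a real deployment would ingest gNMI/OpenConfig streams through a collector such as Paragon:

```python
# Illustrative closed-loop automation sketch. The field names and the
# 80% threshold are made up for this example; they are not Junos or
# OpenConfig paths.

def plan_actions(samples, util_threshold=0.80):
    """Return a remediation action for each interface whose
    utilization exceeds the threshold."""
    actions = []
    for s in samples:
        if s["utilization"] > util_threshold:
            actions.append({
                "device": s["device"],
                "interface": s["interface"],
                "action": "shift-traffic",
            })
    return actions

telemetry = [
    {"device": "ptx-1", "interface": "et-0/0/0", "utilization": 0.92},
    {"device": "ptx-1", "interface": "et-0/0/1", "utilization": 0.21},
]
print(plan_actions(telemetry))
```

The point of the talk's "black box" framing is exactly this: the programmer only needs the data model and an action API, not routing expertise.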

42:58 and this is exactly what we're using to bring some of these outcomes from a Juniper Paragon Suite perspective uh you

43:07 will see Paragon slides I don't have too many Paragon slides here but if you see Paragon slides you will see Paragon has

43:12 the capability to do all the things from a life cycle perspective it starts with planning uh starts with Device

43:19 onboarding uh service provisioning uh optimization uh AIOps with ingesting

43:26 all the data from the network but in the end the question is what are the outcomes that are important for

43:33 customers right we do all this with functionality but we also want to measure the outcomes with Paragon and

43:39 some of the ways we're measuring the outcomes are listed on this slide for example can we onboard the devices

43:45 quickly you know most of the time when the devices are deployed the people who rack and stack are not the people who

43:52 are experts so can we automate that uh after the racking and stacking to onboard the device onto the network can

44:00 we go to zero unplanned outages this is driven by things like active Assurance so every device in the network can be a

44:07 sensor which is actively uh looking at uh things in the network and making

44:13 decisions before failure even occurs um so this is an important aspect and

44:19 once you have active assurance you can do a lot more things than just proactive monitoring but also when things fail and go back into service you can actually uh

44:27 check that things are working before you put them back into service so a lot of capabilities come to the table um with

44:35 autonomous capacity automation yeah in most of the networks you know the

44:40 changes in the networks are not very frequent you make the changes you provision the networks 90% of the time

44:46 the automation tools are going to work on optimization right and this is where uh the goal here is to

44:54 optimize the resources uh based on the usage because most networks will

45:00 have different usage during the day and the night and based on that you can turn on and off things and then go to green

45:06 networking the green networking has multiple things I can turn on and off things based on the usage right I can

45:13 also route things through the most power optimized parts of the network right for

45:20 example if some devices are running at uh 70% throughput

45:27 some other devices are running at 20% throughput it's actually better to move that traffic to the 20% because now

45:34 the overall power footprint of the network can go down so a lot of

45:40 outcomes can be driven here but in the end the goal is to simplify things to move things faster and to drive uh

45:48 efficiencies from both an optimization perspective and a power management

45:54 perspective and coming to one of the last bullets that Sterling had about security I think we

46:01 can all agree that security is fundamental to the networks right

46:07 security is fundamental to any part of the business today and we know cyber security is top of

46:13 mind for everybody every CIO every CTO in the world when it comes to the networking infrastructure the routers

46:21 are not always sitting in a controlled facility uh many times they're sitting

46:26 in a third-party facility in a colo in a place you actually don't control things hence

46:31 the infrastructure uh security really becomes important which means you know having things like TPM 2.0 which

46:39 means your router hardware is coming from the right vendor which means the software is not tampered with so

46:46 it's important you know to have those capabilities from a security perspective once you do that then you start securing

46:51 your line which is okay any data that goes out of a router we can encrypt it and

46:56 line-rate MACsec becomes critical there now we have secured the infrastructure we

47:03 secured the line there are still vulnerabilities right because the traffic that's coming in can

47:09 sometimes carry malicious attacks you could get DDoS attacks so

47:14 here uh two things come to the rescue one when these attacks are

47:20 happening can you detect them fast right uh that's number one which means the sampling rates from the NetFlows are

47:27 going to be important uh number two once those attacks are detected can you actually provision the network really

47:33 really fast to prevent those that's where you know fast provisioning of the

47:38 filters is really really important and number three uh can you actually uh be

47:44 very specific here because in these filters we're not just looking at the five-tuple can I look deeper into the payload

47:51 and prevent that malicious traffic so the answer is yes this is where we're

47:56 heading with Express 5 um and with the platform architecture to full

48:03 hardware security MACsec encryption and a fully secure you know from a

48:09 DDoS attack prevention perspective and with that uh I'm going

48:15 to come to my final two slides uh so Dwayne talked about this a little

48:20 bit so I may just talk about this we announced our 800 gig generation

48:26 platforms both fixed form factor and uh the modular chassis uh you've seen

48:33 some of these already at OFC and Mobile World Congress uh if you're one of the customers you already have it uh in

48:40 your labs um and uh we are driving 28.8 terabits on the fixed form factor uh

48:47 and 28.8 terabits per line card on the modular chassis with three form factors four-slot eight-slot and 16-slot

48:54 and on the chassis the new line cards are backwards compatible uh with

49:00 the current 400 gig line cards you can upgrade without changing the fabric uh or you can upgrade changing the fabric

49:06 in which case you will get the full throughput of the new line cards uh the scale and capability that we talked about in terms of FIB tunnels H-QoS

49:15 everything is going to be available uh on these platforms and we are very very excited about this and very excited to

49:21 see you know the survey results where most of the customers are looking to

49:26 move towards 800 gig but at the same time 400 gig is going to be there for a long long time and this interop is

49:33 super super important um like I said uh these are

49:39 some of the customers you know we have been working with and these are the publicly referenceable customers but there is a large set of

49:46 customers which are not in this list so I don't want you to feel bad because this is something we can't publicly reference

49:51 unless we have an agreement with you uh 400 gig has been uh you know deployed very widely already we at

49:59 Juniper are leading this transition and we have words of confidence from customers that 800 gig is going to be

50:04 critical for them and this 400 gig and 800 gig interop is going to be critical for them and we'll have to

50:11 continue working uh with you uh to make this happen in a sustainable

50:17 automated and secure way so that was my last slide let's go

50:24 back to you and see what questions we might have yep absolutely thank you uh thanks both

50:31 Dwayne and Amit and well done on timing uh we got about 10 minutes left so we do have a couple of

50:38 questions in but um for the audience if you have questions Now's the Time to

50:44 fire them away and we'll see what we can uh address here in the time left um let me

50:50 start with um the comments or your comments

50:57 Amit about no limitations in the 800 gig

51:02 implementation um looks like they're just looking to understand uh

51:07 what the industry limitations are around 800 gig right now um so is it kind of

51:12 like a I don't know maybe a litmus test or something like you

51:18 know just to explain I guess what no limitations

51:23 means and I guess what it doesn't yeah thanks so like I think one of the

51:30 slides it said what does 800 gig bring to the table right so 800 gig brings to

51:35 the table 800 gig uh 800 gig IP over DWDM 2x400 8x100 and also 100 gig but

51:44 let's stick to the higher rates it's important that every port is capable of doing all of them uh because we do see

51:51 sometimes people talk about 800 gig but they don't support 800GE you know what that means is you're not going to do DWDM efficiently uh if you

51:59 don't support 800 gig on every port um we do see sometimes people talk about uh

52:05 uh 400 gig 2x400 gig but they're really not able to support uh the two LC

52:11 connectors uh which means um probably you can still do 2 by 400 gig but with

52:17 not the right connectors uh from a 400 gig perspective and then we start to see some folks talk about limitations like

52:23 which ports can you do IP over DWDM on versus not so I think those are some of the limitations I was talking about it's

52:29 really really good to not have those limitations so you get the full flexibility and take advantage of

52:34 800 gig for all the value proposition that 800 gig brings to the

52:40 table great yeah thanks
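Amit's "no limitations" point — every port software-configurable to 800GE, 2x400, or 8x100 — amounts to a simple capability check. The mode table below is a sketch under that assumption, not an actual product configuration model:

```python
# Illustrative check that a port's requested breakout fits its capacity.
# The mode table mirrors the rates mentioned in the talk (800GE, 2x400G,
# 8x100G); it is an example, not a real configuration schema.

PORT_CAPACITY_GBPS = 800
VALID_MODES = {
    "1x800": (1, 800),  # lanes, per-lane rate in Gbps
    "2x400": (2, 400),
    "8x100": (8, 100),
}

def validate_mode(mode: str) -> bool:
    """True if the breakout's total rate fits within the port capacity."""
    lanes, rate = VALID_MODES[mode]
    return lanes * rate <= PORT_CAPACITY_GBPS

# A "no limitations" port accepts every mode in the table.
print(all(validate_mode(m) for m in VALID_MODES))
```

The limitation cases Amit mentions are ports where some of these entries would be missing or rejected.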

52:45 um yeah I had a question myself but let me uh hold off on mine we'll

52:50 get these in and then see if it comes up um

52:59 the yeah maybe this one uh again this one's looking at the um the silicon comments um and

53:07 somebody's interested how do you support mixed-mode operation both 400 and 800 gig

53:13 interfaces uh concurrently uh okay thanks so this goes

53:19 back to the design uh of the silicon um so the silicon is uh

53:25 designed with basically 100 gig SerDes technology as we go to 800 gig

53:31 so every port has 8 by 100 gig SerDes uh routed to it uh and using that uh

53:40 we can drive uh through a software configuration an 800GE port

53:45 because with 8 by 100 gig SerDes uh we can drive 2x400 gig with the MACs

53:51 behind that to do two ports of 400 gig and similarly we can do it for 100 gig because of the way the silicon

53:58 the SerDes technology in the silicon itself is designed um allows us to do the mixed mode

54:04 of 400 gig and 800 gig and every port is individually configurable without any impact from how the other port is

54:12 configured makes sense so the 100 gig SerDes yeah I mean at OFC there was a lot of talk about moving to 200 gig um

54:21 for very specific applications within the data center kind of I don't know if it's off topic but just curious thinking out loud

54:28 how that relates to what you're doing with a product that's uh addressing

54:33 the WAN applications would you actually reduce some of the flexibility moving to 200 gig SerDes or is

54:41 it even a roadmap issue as far as Juniper is concerned yeah we are committed to 1.6T

54:49 let's say what does 200 gig SerDes bring to the table 200 gig SerDes will bring 1.6T to the table uh we are

54:55 committed to 1.6T uh without divulging all the details there will be an Express silicon that will drive uh that

55:02 transition um and again same thing important as we go to 1.6T not to have

55:09 the same limitations so the silicon is designed for all the uh port

55:15 uh speeds and feeds for 200 gig SerDes um and similarly as we go from 800 gig to 1.6T

55:21 we follow the same concepts as the industry um without any limitation

55:26 again yep great um another question came in actually

55:33 before I do that let me just remind you there should be a brief survey that pops up for the audience just uh if

55:39 you could please fill that out before we close out uh we do gather and use that feedback to help with our

55:46 webinars um this question um and Dwayne feel free

55:52 to uh weigh in as well this one um how important is interoperability for

55:57 800 gig Optical interfaces for Metro and for regional reach um a big

56:04 question I get asked as well um maybe Amit if you want to start with that and Dwayne if you have some thoughts as

56:12 well I think let's see what does interoperability mean right uh so

56:17 interoperability means let's start with 400 gig before we even go to 800 gig in 400 gig as an industry we drove

56:25 the standardization to 400 gig ZR ZR+ and 0dBm ZR+ right uh and what

56:33 that means is that I can have a Juniper 400 gig optic on one side and I can have

56:40 a third-party 400 gig optic on the other side and you don't have to think about whether these will interop on the

56:47 line as long as you're using the standard modes they will interop right

56:53 um and that's important because otherwise you get into very bespoke scenarios and uh it may or may not

57:00 work uh you know in the field the same thing goes for 800 gig the

57:07 standardization of these modes is important and uh what that gives us in this generation is the same

57:14 thing peace of mind that your operations teams do not have to think

57:21 uh about you know is this interoperable or not and I believe this is very important because some vendors might

57:29 look at proprietary modes and stuff like that I think that mostly adds things for complexity's sake and honestly those

57:35 proprietary modes end up taking more power than is required for the applications so if you ask me I think

57:42 interoperability is the biggest goal that we need to adopt in the industry um and if there are some

57:48 special things that are required let's make it interoperable yeah I totally agree um
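The speed-versus-distance trade-off Amit described earlier (roughly 800 gig to ~500 km, 600 gig to ~1,000 km, 400 gig for ultra long haul, all with the same optic) reduces to a simple selection rule. The kilometre figures here are the approximate ones from the talk, not datasheet values:

```python
# Pick the highest line rate an 800G ZR-class optic can run for a given
# span. Reaches are the approximate figures quoted in the webinar, not
# vendor datasheet numbers.

REACH_KM = [                  # (max reach in km, line rate in Gbps)
    (500, 800),               # metro
    (1000, 600),              # long haul
    (float("inf"), 400),      # ultra long haul
]

def line_rate_for(distance_km: float) -> int:
    """Return the highest supported line rate for the span length."""
    for max_reach, rate in REACH_KM:
        if distance_km <= max_reach:
            return rate
    raise ValueError("unreachable")

print(line_rate_for(120))    # 800
print(line_rate_for(900))    # 600
print(line_rate_for(3000))   # 400
```

With standardized modes, both ends of the span apply the same rule regardless of vendor, which is the interoperability point being made here.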

57:54 and I think um uh a couple weeks ago we had OFC 2024 and OIF and the Ethernet Alliance

58:03 uh both had a tremendous number of demonstrations multivendor environments with coherent optics 400

58:10 800 gig coherent optics from multiple vendors that were in our PTX 10002

58:16 that we just announced that we showed you today and it worked beautifully um they were all standards

58:22 compliant and uh you know even 800 gig ZR solutions

58:29 were demonstrated so it was exciting to see and I think the last couple of years we've seen you know that uh

58:36 demonstration and I think we had 11 or 12 different coherent optics vendors

58:44 plugged into our router um and we showed interoperability and the

58:49 flexibility for CS support as well so uh really exciting how fast

58:56 the market's moving and how you know the standards bodies have really helped to create uh this type of interoperability

59:02 that I think at the end of the day the customers win yeah glad you mentioned the OIF

59:08 demo uh Dwayne um we did a video with the OIF and with Juniper

59:15 and that should be out soon so um on exactly that the massive amount of

59:20 interoperability demonstrated around the uh coherent optics specifically in that case uh great um we are at the

59:27 top of the hour so uh we'll close out here but I want to thank um of

59:33 course Dwayne and Amit and Juniper for sponsoring and presenting and thank you to the audience for attending and for

59:39 your questions thank you everybody uh and have a great day bye thank you

59:45 thank you
