EmergingNetworking board - Linux running on commoditized switches
s*****g
Posts: 1055
1
Not sure if you guys have heard about Cumulus Networks: its own Linux (based
on Debian) runs on 64-port 10G OEM switches, and users can then run
open-source routing daemons (BIRD, Quagga, etc.). The switch costs less than
half as much as similar products from traditional vendors; initially it
targets the data center ToR switch market.
I think it will get some traction, especially with customers who are very
cost sensitive. It is also going to be useful on the Internet edge: if you
want to do BGP traffic engineering on the fly, no equipment vendor comes
close to a Linux router in terms of programmability; a lot of magic can
happen here.
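To make "on the fly" concrete, here is a minimal sketch of the kind of thing
I mean (my own illustration, not a Cumulus feature; it assumes Quagga's bgpd
with vtysh installed, and the port name, ASN and peer address are all made
up): sample the uplink's TX counter, and if the link runs hot, deprefer the
routes learned from that upstream so outbound traffic shifts to another exit.

import subprocess, time

IFACE = "swp1"        # hypothetical uplink port
PEER = "192.0.2.1"    # hypothetical eBGP upstream
LINK_BPS = 10e9       # 10G link
SECS = 10             # sampling interval

def tx_bytes():
    # Standard counter Linux exposes for every interface.
    with open("/sys/class/net/%s/statistics/tx_bytes" % IFACE) as f:
        return int(f.read())

def vtysh(*cmds):
    # Drive bgpd exactly as you would interactively: vtysh -c "<command>" ...
    args = ["vtysh"]
    for c in cmds:
        args += ["-c", c]
    subprocess.check_call(args)

before = tx_bytes()
time.sleep(SECS)
util = (tx_bytes() - before) * 8.0 / SECS / LINK_BPS
if util > 0.8:
    # Lower local-preference on routes from this peer; other exits then
    # win best-path selection and outbound traffic drains away.
    vtysh("configure terminal",
          "route-map FROM-PEER permit 10",
          "set local-preference 50",
          "exit",
          "router bgp 65001",
          "neighbor %s route-map FROM-PEER in" % PEER,
          "end",
          "clear ip bgp %s soft in" % PEER)

Cron it or loop it, and you have a poor man's traffic engineering that no
CLI macro on a closed box will give you.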
z**r
Posts: 17771
2
Haven't heard about this company, but this concept isn't new, right?
F5 was running the open-source protocol daemons on their BIG-IP load
balancers 10 years ago ...
As for cost, I'm not a CFO, but cost should really mean total cost of
ownership; most of the time the operational cost is higher than the device
itself.
And once you have these bare-metal devices, I personally think an SDN
solution is the far more promising direction.

[In reply to s*****g]

s*****g
Posts: 1055
3
Actually this company just came out of stealth mode; one founder is an
ex-Cisco/Google fellow. I have yet to find out how significant their work is
(I agree, the idea of running Linux on a commodity switch is not new at
all). I am not sure what operational cost you are referring to; the reality
is that most small-to-medium companies with tech operations teams don't
have/don't need dedicated network engineers, yet they have many Linux
experts in house. So operationally, running Linux on network devices will
actually cut operational cost: imagine all devices under one central control
via puppet/chef/salt etc.

[In reply to z**r]

p*****s
Posts: 344
4
No magic to me. If the Linux OS is to be the controller for the 640G switch,
the bottleneck is still there; no matter how many applications it can run,
it won't scale. You either get a flexible 10G application router or a 100G
bare-metal switch; you cannot get both without special hardware.
m******t
Posts: 4077
5
I don't quite get it. iox is also a Linux kernel now, so is the difference
just the apps?

[In reply to s*****g]

a**********k
Posts: 1953
6
They had a tech talk @ Google a week or so ago. The room was
packed, although most of the audience were not network
professionals. The founder is an ex-Cisco hardware guy.

[In reply to s*****g]

s*****g
Posts: 1055
7
Sorry to revive this old thread, but I am telling you, people ... this is
real. We did a greenfield deployment of a Cumulus switch, and it works
great! A 64-port 10G switch based on Trident running Cumulus costs less than
half of the price we used to pay traditional vendors.
A spine/leaf topology of 1U/2U fixed-configuration switches running Cumulus
will be the way to build the next-gen data center IP fabric at any scale. If
you are still thinking of buying expensive multi-slot big switches from
traditional vendors, you need to think twice.
Imagine all the prior-impossibles you can now try on this open platform. The
network will always be a dumb pipe; network devices and servers will be one
entity whose sole role is to serve applications. We should leave the
intelligence to the applications.

[In reply to a**********k]

L******t
Posts: 1985
8
Data center ToR? Where are you, if I may ask?
Is it a Layer 2 or Layer 3 network? How is HA supported?
Anything else great besides the economics?
I've been hearing that the Cumulus product is great, seemingly related to
SDN stuff. But no details.

[In reply to s*****g]

s*****g
Posts: 1055
9
Data center ToR/leaf/spine, no Layer 2, IP fabric. HA at the box level is
expensive and buggy; you move the redundancy from the box level to the
network level to achieve HA, such that any single box's failure will not
affect applications. Our applications do not need sub-second failover, and
if we do need faster convergence, fast BFD timers will be more than enough.
Every vendor has its own definition of SDN. In the case of Cumulus, since it
is an open OS, you can write your own software (whether a simple shell
script or puppet or whatever) to dynamically control the box.
For example, on traditional network devices the only way to monitor the
device's health is through SNMP, which has a lot of problems: the MIB itself
is confusing/vendor specific, let alone the lack of MIB support for the
metrics you are interested in, and there is no standard interface for an
administrator to dynamically react to abnormal readings ... Now with
Cumulus, you can have a native metric collection agent running on the box to
collect any metric you want, and you can write a simple program to integrate
the metrics with a standard alerting system and change the configuration in
real time if a metric goes out of bounds.
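As a toy example of that last point (my own sketch, nothing Cumulus
specific: the port name and threshold are invented, and it relies only on
standard Linux sysfs counters plus iproute2), an agent that watches a port's
error counter and drains the port when the reading goes out of bounds can be
this small:

import subprocess, time

PORT = "swp5"      # hypothetical front-panel port
LIMIT = 100        # max tolerated rx_errors per second
SECS = 10          # sampling interval

def rx_errors():
    with open("/sys/class/net/%s/statistics/rx_errors" % PORT) as f:
        return int(f.read())

last = rx_errors()
while True:
    time.sleep(SECS)
    now = rx_errors()
    rate = (now - last) / float(SECS)
    last = now
    if rate > LIMIT:
        # Hand off to whatever alerting system you already run ...
        print("ALERT: %s seeing %.0f rx_errors/s" % (PORT, rate))
        # ... and react in real time: take the sick port out of service.
        subprocess.check_call(["ip", "link", "set", PORT, "down"])
        break

Try doing that through SNMP on a closed box.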

[In reply to L******t]

z**r
Posts: 17771
10
Cisco ACI is a similar solution, and the Nexus 9k is way cheaper than the
Nexus 7k.

[In reply to s*****g]

s*****g
Posts: 1055
11
What is the price after discount for 64- and 128-port 10G switches from
Cisco? Ballpark number.

[In reply to z**r]

L******t
Posts: 1985
12
Is the Cumulus product capable of Layer 2? Or is it at least on the roadmap?
Any overlay support, or plain Layer 3 only?
Is it a spine-leaf two-layer mesh topology? Two ToRs per rack for
redundancy?
Is the metric collection tool you mentioned something de facto or
proprietary?

[In reply to s*****g]

L******t
Posts: 1985
13
Does Arista offer a similar solution? Any comparison?
I believe Dell has something similar. What makes Cumulus better than Dell?

[In reply to s*****g]

s*****g
Posts: 1055
14
Of course, it can do L2, L3, even VXLAN.
Spine-leaf is not exactly a mesh; depending on your specific requirements
(scalability, oversubscription ratio, etc.), you can have multiple layers of
spine-leaf. You can have two ToRs on each rack, or run routing protocols
from the servers to two ToRs in different racks; there is really nothing
fancy there. In my case, we simply do not care about a rack going offline,
because the application takes care of it.
No, the metric collection tools are all open source; Google collectd/gmond
for details. You can easily write scripts to collect your
application-specific metrics using the APIs; open-source tools give you so
much flexibility to integrate with your monitoring system.

[In reply to L******t]

s*****g
Posts: 1055
15
Cumulus is to Dell what Microsoft/Red Hat is to Dell: Cumulus does not make
switches, it makes software, and the software runs on Dell hardware.
Can an Arista switch run an OS of my choice? Can I build my own software
package and run it on the box? Granted, all vendors can provide feature sets
Cumulus cannot match (at least for now), but what percentage of those
features do you actually need?

[In reply to L******t]

m**k
Posts: 290
16
This is not new. Many switch vendors run Linux in their boxes.
The main difference is price, and the reason for most vendors' higher prices
is that they have too many bad software engineers.
To network engineers, the difference is whether to expose Linux commands to
the user or to have a unified CLI. People have different preferences, and
most CLIs suck.
Also, Linux support for network devices (routing/switching) is not very
good; there is still a lot of work to be done.
m**k
Posts: 290
17

And, more importantly, bad management.

[In reply to m**k]

z**r
Posts: 17771
18
Different customers get different discounts. The list price of the Nexus
93128 (96 10G-T plus 8x40G, including 8 QSFP+) is about $30K. Some customers
may get over a 50% discount; let's say you get 40% off, so the price would
be $18K or so. Remember this includes 8 QSFP+ optics, which could easily
cost a couple of thousand.

[In reply to s*****g]

z**r
Posts: 17771
19
Yes, Arista has a fabric solution as well.

[In reply to L******t]

s*****g
Posts: 1055
20
That price for a 128-port 10G switch is not ridiculously high, but it is
still high compared to bare metal + Cumulus; plus, the Dell S6000 is only
1RU vs. the Nexus 93128's 3RU.

[In reply to z**r]

相关主题
大家看好Arista Networks, Palo Alto Networks 吗?Tor network隐形浏览安全吗?
要说多少次。。。Another real world problem
没人讨论这个消息?Cisco's italian flavor
进入EmergingNetworking版参与讨论
z**r
Posts: 17771
21
Let's say you hit a QoS issue: who are you going to work with?

[In reply to s*****g]

s*****g
Posts: 1055
22
If you have a problem with your home PC, who do you work with?

[In reply to z**r]

L******t
Posts: 1985
23
In the case of L2, how is the MAC explosion issue addressed? I believe
commodity switches have MAC tables of around 64K entries? That said, I
acknowledge L3 mode is good enough for most data centers.
Can you give me a few examples of Cumulus customers? PM is okay. Not being
nosy, but I'm wondering what kind and size of customer is inclined to such
whitebox solutions.

[In reply to s*****g]

z**r
Posts: 17771
24
It depends, so it's a pain, and it's even more painful for PC users without
a tech background!
My point is, you can probably save a bit on buying the product itself, but
your operating cost may get much higher, so eventually you end up spending
more with the cheap product. And you know many of these companies come and
go; they may not exist in the next 6 months.

[In reply to s*****g]

s*****g
Posts: 1055
25
Your point is valid, but the reality is we use open-source software and
bare-metal off-the-shelf servers; we don't need software support, and for
hardware we just RMA it if it breaks. We expect exactly the same from
network devices. In fact, all the switch configuration can be controlled by
puppet, and with ONIE (the equivalent of server-side PXE) you really can
achieve zero-touch provisioning; the whole data center can be torn down and
rebuilt with git/puppet in hours.
We will see how this approach pans out.

[In reply to z**r]

w*f
Posts: 111
26
If you have no money, or want to save money, you can buy second-hand
Cisco/Juniper.
Tech support and product quality matter just as much as price!
If you buy open-source network gear because it's cheap, how does it benefit
the business users that the IT department serves?
h*****a
Posts: 1992
27
Does the Cumulus switch support config replace?

[In reply to s*****g]

s*****g
Posts: 1055
28
Huh? You didn't get it: man "vi"

[In reply to h*****a]

s*****g
Posts: 1055
29
I don't get your argument. Is Linux high quality? Is Cisco/Juniper high
quality?

[In reply to w*f]

c*****i
Posts: 631
30
Different users simply have different needs; more choices are always good.
It's like PCs: the average person buys a brand-name machine, someone who
knows a bit about computers builds their own and installs Windows, a geek
builds their own and installs free Ubuntu, and a guru builds their own and
compiles their own Linux for it.
h*****a
Posts: 1992
31
New guy here; respects to the gurus first.
I think there are some interesting problems in managing the configurations
of these spines/leaves. Take an example: assume you run BGP to the ToRs, and
you use Quagga. When you want to add a ToR, you not only have to provision
the ToR but also change the configs of the connecting leaves. If you don't
have a config-replace ability, you have to generate specific config snippets
for the leaf to add the new ToR as a BGP neighbor, so that it won't
interrupt the other connected ToRs. If your cluster is large enough, this
happens all the time, and you need some way to manage all of these snippets.
This is much more than managing configs for each device. It can be done, but
that's more than vi can do.
Now if you have config-replace ability, that is, you can push your entire
config without having to worry about service interruption, your zero-touch
provisioning would work.
So, does Cumulus have config-replace ability?

[In reply to s*****g]

h*****a
Posts: 1992
32
I guess the question is how you can avoid tearing down the entire data
center to change switch configs. One difference between server land and
network land is that in the network, changes often need to be coordinated
among multiple devices and need to happen in a certain order to avoid
service interruption. Does Puppet take care of this aspect, or do I have to
build some overlay on top?

[In reply to s*****g]

w*f
Posts: 111
33
Open source does not mean good quality. Network hardware running Linux does
not automatically make good-quality network gear. As a customer, good tech
support and features must be weighed against the $.
If I am a Wall Street trader, do I care whether my order is carried by a
Cisco switch or a white-box switch running Linux?
A trader needs the network to be stable and quick.

[In reply to s*****g]

f*****m
Posts: 416
34
The market for this is MSDC: the use cases are relatively simple, and the
required feature set is fixed and narrow. What matters is that it's cheap,
and there's plenty of it. Besides, the likes of Amazon love to roll their
own (in fact, pretty much every MSDC likes to roll its own -- though a few,
after trying it once, found that buying off the shelf was better after all).
s*****g
Posts: 1055
35
I'm hardly a guru; we're just having a discussion.
The scenario you described is not a problem at all for puppet. Puppet has a
concept of templates, which are Ruby (ERB) snippets that generate
configuration on the fly based on conditions.
Think of server-management land: there is a lot of configuration duplication
across servers, and which configuration goes before/after which, and what
depends on what, is all well taken care of.
I cannot imagine any large-scale data center deployment needing human
interaction; the configuration should be generated from a topology graph
(.DAT file).
Humans make mistakes but computers do not.
Disclaimer: since for me this is just a one-box greenfield deployment, I
haven't done any automation yet beyond pushing admins' SSH public keys to
the box, but in theory I don't see any problem doing it.
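For illustration only (Puppet templates are actually ERB, but the idea fits
in a few lines of Python, and every name, ASN and address below is
invented): keep the topology as data and render each box's bgpd.conf from a
template. Adding a ToR then becomes adding one record and re-rendering; the
diff against the running config is exactly the new neighbor stanza.

# Topology as data: leaf -> (leaf ASN, [(tor, tor ASN, peer IP), ...])
TOPOLOGY = {
    "leaf1": (65101, [("tor1", 65201, "10.1.0.1"),
                      ("tor2", 65202, "10.1.0.5")]),
}

TEMPLATE = """\
router bgp %(asn)d
 bgp router-id %(rid)s
%(neighbors)s
"""

def render(leaf, rid):
    asn, tors = TOPOLOGY[leaf]
    neighbors = "\n".join(" neighbor %s remote-as %d" % (ip, tas)
                          for _tor, tas, ip in tors)
    return TEMPLATE % {"asn": asn, "rid": rid, "neighbors": neighbors}

# Adding tor3 is a one-record change; re-render, review the diff, push.
TOPOLOGY["leaf1"][1].append(("tor3", 65203, "10.1.0.9"))
print(render("leaf1", "10.255.0.1"))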

[In reply to h*****a]

s*****g
Posts: 1055
36
Agree. I heard the story behind why Amazon went their own way: the quote
from a major vendor to build a network at their scale was humongous, and it
simply did not make economic sense for them. When things reach a certain
scale, you have to think about alternative ways to save money.

[In reply to f*****m]

s*****g
Posts: 1055
37
"tear down" is an exaggeration(it can happen though), my point was when a
box (be it a server or
network device)dies, you need to have the ability to put a whitebox in,
connect all the cables and then flip the power switch, and, boom, you are in
business in no time.

[In reply to h*****a]

h*****a
Posts: 1992
38
The days when we hand-wrote configs are long gone. Everything is
automatically generated and pushed, for standardization and correctness. The
problem I described is not bringing up 1000 switches and configuring them
initially. Once the network is up and running, config changes have to be
incremental to avoid service interruption. You basically have to think ahead
about the possible changes you are going to make, so that you can build a
template for each type of change. This becomes harder to manage when you
have multiple clusters. You'll need something that overlays puppet.

[In reply to s*****g]

h*****a
Posts: 1992
39
It's a consensus that it's hard to run L2 for this scale-out design with
commodity switches. The IP-to-MAC translation has to happen very close to
the edge, or even at the host level. There are quite a few (famous) papers
from the past several years trying to address how/where to do the IP-to-MAC
translation at scale.

[In reply to L******t]

L******t
Posts: 1985
40
Any link to those papers?
Cisco ACI is designed to handle L2 in MSDC. Of course, there is some
proprietary stuff and it's costlier. :)

[In reply to h*****a]

相关主题
John Chambers needs to go求审稿机会 (Ethernet networking, switch/router, data center)
思科的ACI关于SNMP
感觉这是20年来网络界最大的技术变动时代。在一个device里面同时实现router和switch有什么好处?
进入EmergingNetworking版参与讨论
h*****a
Posts: 1992
41
VL2, SIGCOMM 2009: http://research.microsoft.com/pubs/80693/vl2-sigcomm09-final.pdf
Microsoft apparently made modifications in the server OS so that ARP happens
at the host level.
Also check the papers that VL2 cites and the ones that cite VL2.
So far, all the solutions seem to come with significant cost. The cost may
not be directly in money, but in effort.
So, back to square one: why do we need large-scale L2? Is L3 sufficient?

[In reply to L******t]

s*****g
Posts: 1055
42
Because there are applications that need a flat Layer 2 network, vMotion for
one.
There is simply no one-size-fits-all design.

[In reply to h*****a]

z**r
Posts: 17771
43
Keep us updated on how it works. There has to be something on the bare-metal
switch, either an OpenFlow-agent kind of thing or power-on automatic
provisioning, right? So fundamentally I still feel it's very much like Cisco
ACI, maybe in a cheaper way. Very interested in how it works moving forward
...

[In reply to s*****g]

s******v
Posts: 4495
44
I think this direction is right; it's a replay of how servers evolved over
the past 10 years, and I don't think there is anything special about
networking that would make it go the opposite way.
Last time I heard the Netflix folks talk about Open Connect: they think
Cisco's products are decent, but 1) expensive and 2) complex, while their
needs are simple and don't require that many features; ideally they could
add their own features. I believe they would be interested in a product like
this.
Does Cumulus do this SDN thing too? Is it open source?
Using the trading floor as an example to argue that Cumulus has no future
makes no sense to me. Trading is the most demanding, price-insensitive
customer in networking, the least suited to this model.
New things are bound to have problems of one kind or another. No config
replace? We can write an open-source one ourselves. No support? Engineers
like us can start a business to support it. These problems are not
unsolvable; on the contrary, they may be an opportunity.

[In reply to s*****g]

s*****g
Posts: 1055
45
No, not that complicated at all ... In my opinion, Cumulus's main
contributions are: 1) they wrote ONIE (think of ONIE as BIOS+PXE on a
server), a powerful and flexible boot loader -- a stripped-down Linux, I
think; and 2) they wrote a hardware abstraction layer such that Linux can
control the interfaces and get hardware forwarding for L2 and L3 on
supported platforms (based on the Broadcom Trident chip).

[In reply to z**r]

v**n
Posts: 951
46
The UCS old-timers may laugh out loud ...
Anyway, Intel should have a counterpart to ONIE; UCS follows the same idea.
The points you made earlier about price and flexibility are valid.
Have you thought about opex, though? I don't see it being low.

[In reply to s*****g]

d****i
Posts: 1038
47
Security for open OSes and platforms will be a concern. Linux is full of
security holes, especially if you open all the hardware capabilities to the
customers; if you try to tighten it, you lose some openness. For small
vendors selling to small customers this won't be too much of an issue, but
once small vendors become big, it will raise many question marks.

[In reply to s*****g]
: ACI,

a**********k
Posts: 1953
48
This is what ACI/APIC aims to solve, IMHO.

[In reply to h*****a]

a**********k
Posts: 1953
49
I tend to agree with you. That's the trend, although it will
take years to mature.
I believe someone will buy Cumulus. VMware? Broadcom?
Google? It would change the landscape overnight if any of
these events happened.

[In reply to s******v]

b******a
Posts: 153
50
The Cumulus proposition focuses heavily on the ToR switch (i.e., the access
switch), where Cisco and the other traditional network vendors earn most of
their DC revenue. Most people evaluating Cumulus are not ready to move their
full network to Cumulus software on white boxes yet, for obvious reasons.
The move Facebook made in OCP (Open Compute Project) is on a direct
collision course with Cumulus: FB will release its switch software back to
the community for free. But guess what, even FB runs its switch software on
some vendor's switch, not a white box, and its switch software only takes
care of FB's L2 access switches in the DC. For L3, FB still buys vendors'
switches.
L******t
Posts: 1985
51
An excerpt from ipspace.net: http://blog.ipspace.net/2014/08/vmware-evorail-one-stop-shopping-for.html
Why would I buy VMware EVO:RAIL?
In a word: single point of blame ;) It goes against everything HP, Gartner,
SDN evangelists, and whitebox switching aficionados tell us, but sometimes
it makes more sense to pay more and have peace of mind (not to mention
uptime) than to troubleshoot the hidden intricacies of home-brewed
concoctions.

[In reply to b******a]

Q*******e
Posts: 939
52
Cisco UCS Mini and Dell PowerEdge VRTX already deliver such
infrastructure in hardware.
What's EVO:RAIL's advantage: cheaper, easier to manage?
Buy Nutanix's SuperMicro hardware
and blame VMware for it?

[In reply to L******t]

d****i
Posts: 1038
53
Yesterday's bash CVE (Shellshock) may have a huge impact on these
Linux-based commoditized switches.

[In reply to d****i]

s*****g
Posts: 1055
54
How? Network devices are not servers; they don't expose services to the
public.

[In reply to d****i]

d****i
Posts: 1038
55
Actually, one of the major concerns with this CVE is routers/switches, where
upgrading/patching is not as easy as on servers, especially for those
routers/switches that have enabled web services for management/configuration
etc., which are more and more popular.

[In reply to s*****g]

m**k
Posts: 290
56
One of our hard requirements when buying routers is that web management is
disabled by default.
F5 is a special case; its CLI is truly awful, very chaotic.
But F5's web UI is only exposed to the internal network, so this bug doesn't
have much impact there.
The real victims are some ancient websites still using CGI. The Internet
being as big as it is, there are plenty of those.

[In reply to d****i]

f*****m
Posts: 416
57
As more and more vendors adopt REST APIs/RESTCONF, similar issues will
surely impact network devices in the future.

[In reply to m**k]
