w*f posts: 111 | |
a***n posts: 262 | 2 My simple analogy
QFabric is like a wireless-controller-based architecture.
node is the thin/thick AP - Aruba AP
interconnect is like the controller - Aruba Controller
window/view is like the NMS - Aruba Airwave
[Quoting w*f]: Thanks
|
z**r posts: 17771 | 3 I was on a business trip last week and had dinner with a manager in charge of QFabric, though we didn't get into details. Roughly, it seems to connect a number of switches with QFabric to achieve the effect of one virtual switch? And without geographic restrictions?
[Quoting w*f]: Thanks
|
B*****R posts: 1539 | 4 u work for jnpr now?
[Quoting z**r]: I was on a business trip last week and had dinner with a manager in charge of QFabric, though we didn't get into details. Roughly, it seems to connect a number of switches with QFabric to achieve the effect of one virtual switch? And without geographic restrictions?
|
L******t posts: 1985 | 5 Must be jnpr promoting the QFabric-based switch to zher's company.
Csco would be doing a similar thing soon, if not already, I guess.
[Quoting B*****R]: u work for jnpr now?
|
z**r posts: 17771 | 6 haha, I'd love to
[Quoting L******t]: Must be jnpr promoting the QFabric-based switch to zher's company. Csco would be doing a similar thing soon, if not already, I guess.
|
L******t posts: 1985 | 7 zher, which carrier (I assume) are you working for? PM is okay if you do not
want to disclose it in public. ;-)
Such a flat data center switch is not for everyone; as I understand it, it is more
for certain enterprises than for carriers. Why would your company
want it?
[Quoting z**r]: haha, I'd love to
|
z**r posts: 17771 | 8 did I say my company wanted it? hoho, you don't know who I am with? ...
[Quoting L******t]: zher, which carrier (I assume) are you working for? PM is okay if you do not want to disclose it in public. ;-) Such a flat data center switch is not for everyone; as I understand it, it is more for certain enterprises than for carriers. Why would your company want it?
|
L******t posts: 1985 | 9 Man, I didn't know.
But a little google goes a long way. At least I'm not wrong about the strong SP
background part. :)
[Quoting z**r]: did I say my company wanted it? hoho, you don't know who I am with? ...
|
t*******r posts: 3271 | 10 The one above is wrong again~
|
L******t posts: 1985 | 11 Wrong about what, please? It's work time, so I assume I can use English.
[Quoting t*******r]: The one above is wrong again~
|
t*******r posts: 3271 | 12 You didn't use Chinese outside work time either
=======================
From: LieHeart (莱因哈特), Board: EmergingNetworking
Subject: Re: What is Juniper QFabric all about?
Site: BBS 未名空间站 (Tue Mar 8 02:03:29 2011, US Eastern)
Man, I didn't know.
But a little google goes a long way. At least I'm not wrong about the strong SP
background part. :)
======================= |
z**r posts: 17771 | 13 can you pm me what you found via google? haha
[Quoting L******t]: Man, I didn't know. But a little google goes a long way. At least I'm not wrong about the strong SP background part. :)
|
a**********k posts: 1953 | 14 All companies are working hard to provide that now.
For instance, JNPR's QFabric, BRCD/FDRY's VCS, CSCO's
Nexus-based solution, etc. Any prediction on who will be
the winner on the data center front?
[Quoting z**r]: I was on a business trip last week and had dinner with a manager in charge of QFabric, though we didn't get into details. Roughly, it seems to connect a number of switches with QFabric to achieve the effect of one virtual switch? And without geographic restrictions?
|
s*****g posts: 1055 | 15 This is my prediction: Cisco's FabricPath will capture the majority of the
market, simply because of its marketing power and incumbency. Juniper will
follow, with BRCD a distant 3rd. The problem with BRCD is that it has yet to
gain trust from potential customers.
[Quoting a**********k]: All companies are working hard to provide that now. For instance, JNPR's QFabric, BRCD/FDRY's VCS, CSCO's Nexus-based solution, etc. Any prediction on who will be the winner on the data center front?
|
a**********k posts: 1953 | 16 Stacking systems, and to some extent multi-chassis LAG, are
just simpler versions of full-mesh virtual switches.
Stacking and MC-LAG are common features nowadays.
[Quoting z**r]: I was on a business trip last week and had dinner with a manager in charge of QFabric, though we didn't get into details. Roughly, it seems to connect a number of switches with QFabric to achieve the effect of one virtual switch? And without geographic restrictions?
|
W****2 posts: 297 | 17 QFabric is meant to compete with Cisco's TRILL, not FabricPath, right? |
L******t posts: 1985 | 18 QFabric is still not the same thing as TRILL.
TRILL is two-tier; QFabric is one-tier, totally flat. But TRILL is not the
end of the story on the Cisco side.
[Quoting W****2]: QFabric is meant to compete with Cisco's TRILL, not FabricPath, right?
|
L******t posts: 1985 | 19 The rule is: talk to tonyblair in Chinese while off work. I can talk about anything
to anyone else anytime.
[Quoting t*******r]: You didn't use Chinese outside work time either ======================= From: LieHeart (莱因哈特), Board: EmergingNetworking Subject: Re: What is Juniper QFabric all about? Site: BBS 未名空间站 (Tue Mar 8 02:03:29 2011, US Eastern) Man, I didn't know. But a little google goes a long way. At least I'm not wrong about the strong SP background part. :) =======================
|
t*******r posts: 3271 | 20 What are you talking about?
Isn't it tiring for one Chinese person to speak English with another?
[Quoting L******t]: The rule is: talk to tonyblair in Chinese while off work. I can talk about anything to anyone else anytime.
|
w*f posts: 111 | 21 update
answer to "What is Juniper QFabric in layman's terms?"
Rakesh Singh:
QFabric is a new network architecture from Juniper which makes the whole data
center behave as if it were one giant switch.
There are 3 components to QFabric: Node, Interconnect and Director. To take
the analogy of a typical chassis switch, the QF/Nodes are the line cards in the
chassis, the QF/Interconnect is the switch fabric, and the QF/Director is the
CPU card.
A typical chassis switch is limited to 8 or 16 line cards, so its scale is
restricted. QFabric starts by methodically breaking the chassis into pieces.
On the data plane side, it starts with the separation of the fabric from the
line cards. Instead of using copper traces to connect things, QFabric uses
fiber optics. This way it can support 128 line cards instead of the typical 8.
Line cards are optimized for server connectivity — a 1RU, top-of-rack
design with Ethernet and FC interfaces. In between there are four 40-Gig
fibers connecting each line card to the Interconnect, using OM4 standard fiber.
The Interconnect is basically Ethernet-in, Ethernet-out and works
exactly the same as the fabric inside a chassis switch. The Interconnects are
not network switches; they are simplified devices with only one
function: transport. Since they don't run any protocols, they can be made
denser, faster and cheaper. For larger scale there can be up to 4 Interconnect
chassis, with one fiber connection to each. This configuration would
support 128 QF/Node devices. Each QF/Node device has 48 10-Gig ports.
The QF/Director is an x86 appliance which directly controls and federates
state information to all the QF/Nodes over an out-of-band network.
Everything in the real world connects to the QF/Node, and the QF/
Interconnect is simply transport. Most of the processing is done at the
ingress port and some processing is done at the egress port, just like in a
typical switch.
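As a sanity check on those numbers, here is a quick back-of-the-envelope calculation. The node count, port counts, and speeds come from the answer above; the resulting oversubscription ratio is my own arithmetic, not a figure from the source:

```python
# Scale figures quoted above: up to 128 QF/Nodes, each with 48 x 10-Gig
# server-facing ports and 4 x 40-Gig uplinks into the Interconnects.
NODES = 128
PORTS_PER_NODE = 48
PORT_SPEED_GBPS = 10
UPLINKS_PER_NODE = 4
UPLINK_SPEED_GBPS = 40

server_ports = NODES * PORTS_PER_NODE                        # total 10-Gig edge ports
edge_gbps_per_node = PORTS_PER_NODE * PORT_SPEED_GBPS        # capacity toward servers
uplink_gbps_per_node = UPLINKS_PER_NODE * UPLINK_SPEED_GBPS  # capacity toward fabric

print(server_ports)                               # 6144 edge ports fabric-wide
print(edge_gbps_per_node / uplink_gbps_per_node)  # 3.0, i.e. 3:1 oversubscription per node
```

So a fully built-out fabric exposes 6144 10-Gig ports, and each node is 3:1 oversubscribed toward the Interconnects.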
Source:
Juniper.net - http://communities.juniper.net/t...
NetworkWorld - http://www.networkworld.com/news...
To see the question page with all answers, visit:
http://www.quora.com/l/mNXjnYVJ |
a***n posts: 262 | 22 This is a good explanation.
Cisco is working on a similar thing, at least on the ASR,
as far as I know.
[Quoting w*f]: update answer to "What is Juniper QFabric in layman's terms?" Rakesh Singh: QFabric is a new network architecture from Juniper which makes the whole data center behave as if it were one giant switch. There are 3 components to QFabric: Node, Interconnect and Director. To take the analogy of a typical chassis switch, the QF/Nodes are the line cards in the chassis, the QF/Interconnect is the switch fabric, and the QF/Director is the CPU card.
|
d****i posts: 1038 | 23 Isn't this idea similar to what OpenFlow or Open vSwitch wanted to achieve?
[Quoting w*f]: update answer to "What is Juniper QFabric in layman's terms?" Rakesh Singh: QFabric is a new network architecture from Juniper which makes the whole data center behave as if it were one giant switch. There are 3 components to QFabric: Node, Interconnect and Director. To take the analogy of a typical chassis switch, the QF/Nodes are the line cards in the chassis, the QF/Interconnect is the switch fabric, and the QF/Director is the CPU card.
|
L******t posts: 1985 | 24 Kind of. Theoretically the nodes (TOR) can be heterogeneous across vendors/
product lines, as long as certain protocols are observed. But the
practicality is pretty low.
[Quoting d****i]: Isn't this idea similar to what OpenFlow or Open vSwitch wanted to achieve?
|
v**n posts: 951 | 25 OpenFlow switch + NOX + vSwitch?
[Quoting a***n]: My simple analogy: QFabric is like a wireless-controller-based architecture. node is the thin/thick AP - Aruba AP; interconnect is like the controller - Aruba Controller; window/view is like the NMS - Aruba Airwave
|
v**n posts: 951 | 26 hasn't it been proven a failure in most service provider environments
and non-Google enterprise environments?
[Quoting d****i]: Isn't this idea similar to what OpenFlow or Open vSwitch wanted to achieve?
|
t*********e posts: 1136 | 27 There might be the following issues with this model:
1. Wiring will be too complex. QFabric essentially externalizes the guts of
a switch. It is like telling network admins to design and assemble a
switch themselves. Is that an attractive proposition? And I don't see what
advantage this has over normal high-density modular switches.
2. Latency and reliability will be a problem. Switch latency is approaching
zero, and more wiring means a slower switch; this is against the trend. More
wiring also means less reliability.
3. If the director is just policy-management software, then it is just
hype. Otherwise, if it functions as a centralized control plane, convergence
could be messy. For example, when a link flap or peering instability
happens, everything waits for the director to resolve the routing table.
Significant delays may occur across the data center, and hence traffic
black holes. The central director is a single point of failure.
It seems that the purpose of QFabric is to simplify leaf-spine topologies by
replacing TOR leaf switches with TOR line cards. It's an interesting idea
though. |
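The centralized-control-plane worry in point 3 can be illustrated with a toy model. All timings below are made-up illustrative values, not based on any published QFabric behavior: with a central director, reconvergence after a link flap grows with the number of nodes it must update, while purely local repair does not.

```python
# Toy model: time to reconverge after a link flap, in milliseconds.
# Every constant here is a hypothetical illustrative number.

def central_convergence_ms(nodes, notify_rtt=1.0, compute=0.5, push_per_node=0.05):
    # Failure report travels to the director, the director recomputes,
    # then pushes updated state to every node in the fabric.
    return notify_rtt + compute + nodes * push_per_node

def local_convergence_ms(compute=0.5):
    # Each node repairs around the failed link using its own local state.
    return compute

for n in (8, 128):
    print(n, central_convergence_ms(n), local_convergence_ms())
# The centralized figure grows linearly with fabric size; the local one stays flat.
```

Whether the director actually sits in the reconvergence path this way is exactly the open question the post raises.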