
Intel's Larrabee

Post Date: 2008-08-05

Harry
Groupie
Joined: 03 Aug 2007
Posts: 190
Posted: 05 Aug 2008 at 11:27am

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3367&p=1

So I tried to read this and my conclusion was -- WHAT?

I don't understand. Is this going to be a separate GPU card like all the others that goes into a PCI slot? Is it a mobo chip? Both?
Cooler Master HAF 932
Core i7 965 DSO OC
Water cooled CPU
ASUS Rampage II Extreme
6 GB DDR3 1600 OCZ
1000 W Corsair HX
GTX 470
300 GB Raptor
1 TB 7200 RPM
Windows 7
SunfighterLC
DS Veteran
Joined: 18 Feb 2008
Posts: 1527
Posted: 05 Aug 2008 at 11:38am
It's a CPU that can emulate the functions of a GPU.
E8500 @ 4.03 GHz
XFX 790i Ultra
1000W Corsair HX
2x EVGA GTX 280 FTW
4 GB OCZ Reaper 1800 MHz
250-80-300GB VR HD
Logitech Z-2300 2.1 Speakers
Asus Xonar 7.1
Hanns-G HG 281D 28" HDMI Monitor
widdlecat
DS Veteran
Joined: 11 Mar 2008
Posts: 840
Posted: 05 Aug 2008 at 4:44pm
I can see Larrabee as a very useful tool for digital graphics. I'm talking about digital movie effects and 3D animation for film. Those people are willing to write programs to make things work their way, so this opens the door for them to play around a lot.
gamerk2
Groupie
Joined: 28 May 2008
Posts: 198
Posted: 06 Aug 2008 at 10:59am
^^ If CUDA didn't already do this, and PhysX didn't already do onboard physics, this would have been a great tool.  I think Intel missed the boat by a few months.
DST4ME
DS ELITE
Joined: 14 Apr 2008
Posts: 36758
Posted: 06 Aug 2008 at 5:07pm
This is what Intel says:


Intel Talks Details on Larrabee

Intel will begin sampling Larrabee in 2008 with products on market in 2009 or 2010


Today there are three main players in the graphics market producing hardware -- Intel, AMD, and NVIDIA. As the market stands right now, only AMD and NVIDIA manufacture discrete graphics cards, with Intel sticking exclusively to the on-board graphics that are common on the vast majority of notebook and desktop computers in the low and mid-range market.

Intel is looking to change that and will be bringing its own discrete products to market at some point. The discrete graphics cards from Intel will use the Larrabee architecture and, according to eWeek, won't be available until 2009 or 2010. eWeek does say that Intel will be sampling Larrabee in 2008.

Intel has begun talking about the Larrabee architecture and naturally, it feels that Larrabee is the best architecture out there. What makes Intel so enthused by its architecture is that the Larrabee core is based on the Pentium CPU and uses x86 cores. The use of x86 cores means that programmers and game developers can use the familiar programming languages -- like C and C++ -- that have been in use for a number of years, rather than having to learn a new programming language like NVIDIA's CUDA.
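
To make that point concrete, here is a minimal sketch of the kind of ordinary C the article is talking about. Nothing below is actual Larrabee code; it is just a plain loop that any x86 compiler already accepts, which is exactly what Intel's pitch says developers could keep writing.

    /* Hypothetical sketch: per-pixel work written as ordinary C.
       Nothing here is Larrabee-specific; that is the point --
       an x86-based part can run code like this without a new language. */
    #include <stddef.h>

    void shade_scanline(float *out, const float *in, size_t n, float gain)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * gain;  /* stand-in for real per-pixel shading */
    }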

Intel says that its Larrabee is a many-core processor and eWeek reports that it will likely contain ten or more individual x86 processor cores inside the silicon package. Discrete graphics cards using the Larrabee architecture will initially be aimed at the gaming market. That means Intel is directly targeting AMD and NVIDIA with Larrabee.

Intel says Larrabee will support both the DirectX and OpenGL APIs and is encouraging developers to design new, graphics-intensive applications for the architecture. Larrabee will also bring a new era in parallel computing, with developers able to write applications for it using the C and C++ programming languages.

Intel has combined the throughput of a CPU with the parallel programming ability of a GPU. Intel says that Larrabee will also contain vector-processing units to enhance the performance of graphics and video applications. The x86 cores feature short instruction pipelines, and each core can support four execution threads. Each core also supports a register set to help with memory. The short instruction pipeline allows each core faster access to its L1 cache.
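
As a rough illustration of what "vector-processing units" plus plain C can mean in practice, the sketch below strip-mines a loop into fixed-width chunks that a vectorizing compiler can map onto wide vector instructions. The 16-lane width is an assumption based on contemporary reports, not a figure from the article above.

    /* Hypothetical sketch only: VEC_WIDTH = 16 single-precision lanes is
       an assumption, not a number from the article. A vectorizing compiler
       can turn the inner loop into one wide multiply per strip. */
    #define VEC_WIDTH 16

    void scale_strip(float *out, const float *in, int n, float k)
    {
        int i;
        for (i = 0; i + VEC_WIDTH <= n; i += VEC_WIDTH)  /* full strips */
            for (int j = 0; j < VEC_WIDTH; j++)
                out[i + j] = in[i + j] * k;
        for (; i < n; i++)                               /* remainder */
            out[i] = in[i] * k;
    }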

Intel says that all cores on Larrabee will share access to a large L2 cache partitioned for each of the cores. The arrangement of the Larrabee architecture allows it to maintain an efficient in-order pipeline, yet allows the processor some benefits of an out-of-order processor to help with parallel applications. Communication between all of the Larrabee cores will be enhanced by using what Intel calls a bidirectional ring network.
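
One practical consequence of a partitioned shared L2 is that software wants to work in tiles that fit a single core's slice of the cache. The sketch below shows that pattern in plain C; the 64 KB tile size is purely an illustrative assumption, since the article gives no per-core partition size.

    /* Hypothetical cache-blocking sketch. The ~64 KB per-core figure is
       an illustrative assumption; the article does not disclose sizes. */
    #include <stddef.h>

    #define TILE_ELEMS (64 * 1024 / sizeof(float))  /* floats per tile */

    static void process_tile(float *t, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            t[i] += 1.0f;            /* stand-in for real per-tile work */
    }

    void tiled_pass(float *data, size_t n)
    {
        /* Walk the array one tile at a time so the working set stays
           resident in one core's L2 partition between passes over it. */
        for (size_t start = 0; start < n; start += TILE_ELEMS) {
            size_t len = n - start < TILE_ELEMS ? n - start : TILE_ELEMS;
            process_tile(data + start, len);
        }
    }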

Larry Seiler from Intel says, "What the graphics and general data parallel application market needs is an architecture that provides the full programming abilities of a CPU, the full capabilities of a CPU together with the parallelism that is inherent in graphics processors. Larrabee provides [that] and it's a practical solution to the limitations of current graphics processors."

According to News.com, one Intel slide shows that the performance of the Larrabee architecture scales linearly, with four cores offering twice the performance of two cores. News.com also reports that core counts for Larrabee will range from 8 to 48; the exact core count for the architecture is unknown at this time.
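
Taking the linear-scaling claim at face value, the arithmetic is trivial but worth spelling out. The toy program below just evaluates the model (n cores gives n times one core's throughput) across the 8-to-48 range News.com mentions; the model comes from the slide as reported, and the absolute numbers mean nothing.

    #include <stdio.h>

    /* Toy evaluation of the linear-scaling model the slide implies:
       perf(n) = n * perf(1). Illustrative only; no real Larrabee data. */
    int main(void)
    {
        for (int cores = 8; cores <= 48; cores += 8)
            printf("%2d cores -> %2dx one core\n", cores, cores);
        return 0;
    }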



---------------------------------------------------------


This is what NVIDIA's response was:

NVIDIA Clears Water Muddied by Larrabee

NVIDIA says its GPUs are in fact programmable in the C language


Yesterday, DailyTech ran a story about details on Intel's upcoming Larrabee architecture for the graphics market. One of Intel's most important talking points when it plays up the benefits of Larrabee over NVIDIA's GPUs is the fact that NVIDIA's GPUs require developers to learn a new programming language called CUDA.

Intel says that with its Larrabee architecture developers can simply program in C or C++ just as they would for any other x86 processor. According to Intel, the ability to program Larrabee with C or C++ makes it much easier for developers to port applications from other platforms to the Larrabee architecture.

After DailyTech ran the story, NVIDIA wanted to address what it considers to be misinformation when it comes to CUDA. NVIDIA says:

CUDA is a C-language compiler that is based on the PathScale C compiler. This open source compiler was originally developed for the x86 architecture. The NVIDIA computing architecture was specifically designed to support the C language - like any other processor architecture. Competitive comments that the GPU is only partially programmable are incorrect - all the processors in the NVIDIA GPU are programmable in the C language.

NVIDIA's approach to parallel computing has already proven to scale from 8 to 240 GPU cores. Also, NVIDIA is just about to release a multi-core CPU version of the CUDA compiler. This allows the developer to write an application once and run across multiple platforms. Larrabee's development environment is proprietary to Intel and, at least disclosed in marketing materials to date, is different than a multi-core CPU software environment.


Andrew Humber from NVIDIA distilled things a bit further saying, "CUDA is just our brand name for the C-compiler. They aren't two different things."

Humber also pointed out that at NVIDIA's financial analyst day in April it showed an astrophysics simulation running on integrated graphics with an eight-core GPU, a GeForce 8 series GPU with 128 cores and a quad-core CPU. NVIDIA says that the demonstration used exactly the same binary program across the range of GPUs and the exact same source code for the CPU and GPU.
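
For readers who have not seen CUDA source, a standard textbook vector-add makes NVIDIA's "it's just C" claim easy to judge: the kernel body is plain C, with the __global__ qualifier and the <<<...>>> launch syntax as the extensions. This is a generic example, not code from the demonstration described above.

    /* Standard CUDA vector add -- generic textbook code, shown only to
       illustrate how close CUDA source is to plain C. */
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one element per thread */
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1024;
        const size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
              *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  /* 4 blocks of 256 */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);   /* expect 3.000000 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }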



Edited by DST4ME - 06 Aug 2008 at 5:10pm