
Posts posted by shahed26

  1. Los Angeles (CA) – While most people would be happy just to own a solid state drive, BitMicro says it can now make SSDs with an amazing 6.5 TB of capacity. At the Siggraph trade show held at the Los Angeles Convention Center, a company representative told TG Daily that the custom-made drives can deliver up to 55,000 input/output operations per second (IOPS) and sustained transfers of up to 230 MB/sec.

    The custom drive probably won’t fit in your average desktop because, the rep added, it is about two to three times as high as a regular drive. Thankfully, the company continues to mass-produce drop-in SSD replacements for laptops and desktops. BitMicro’s E-Disk Altima line comes in a variety of connections, including SATA, SCSI and Fibre Channel.


  2. Madison (WI) – Researchers from the University of Wisconsin-Madison and Hitachi Global Storage Technologies discovered a new patterning technology that could lead to a new generation of hard drives that are not only easier to manufacture but also feature a storage density that cannot be achieved with traditional media and processes today.

    Future hard drive and disk technologies are among the most fascinating topics in IT. Research in this area does not only affect a cheap mass storage technology we all depend on; hard drive technology has also always been at the center of a discussion about the natural limits of its storage capacity, and the industry now looks back on a 35-year history of moving predicted natural barriers a few years further out into the future.

    My personal favorite in this category is OAW, or Optically Assisted Winchester, a technology that was developed by Quinta, a company that was acquired by Seagate in 1998. Ten years ago, Seagate believed that the natural barrier of storage density in traditional (Winchester) hard drives would be somewhere in the 100 Gb per square inch neighborhood, while the laser-based OAW approach would be able to hit about 250 Gb per square inch. OAW was described as using a laser beam to heat the surface material before recording data at a specific location, which reduces the need for an ever stronger magnetic field, as is the case with growing densities in today's drives.

    Of course, today we know that perpendicular magnetic recording recently bought the industry some time and pushed the natural barrier once again out into the future. 250 Gb per square inch disks are in mainstream production today, while some drives are already exceeding 330 Gb per square inch.

    OAW is not off the table, though, and was moved into a new project called Heat Assisted Magnetic Recording (HAMR) in 2004. Back then, Seagate said that HAMR had the potential to take platter densities to 50 terabits per square inch, which translates into storage space for more than 3.5 million high-resolution photos, 2800 audio CDs or 1600 hours of movies on a platter the size of a half dollar (30.61 mm).

    Heat-assisted technology, however, is only one way to think about future hard drives. As is the case with many technologies that get smaller and smaller today, many believe that production technologies will eventually reach a point at which they cannot scale anymore and new production technologies will be required – technologies that do not scale down, but instead use a process to build up structures from scratch.

    Such an idea is now being pitched by researchers from the University of Wisconsin-Madison and Hitachi Global Storage Technologies (HGST). The new production method builds on existing approaches by combining the lithography techniques traditionally used to pattern microelectronics with novel self-assembling materials called block copolymers, the researchers said. When added to a lithographically patterned surface, the copolymers' long molecular chains spontaneously assemble into the designated arrangements. According to a paper published in the August 15 issue of Science, the block copolymers pattern the resulting array down to the molecular level, offering a precision unattainable by traditional lithography-based methods alone. They even correct irregularities in the underlying chemical pattern.

    The result: Nanoscale control theoretically allows the researchers to create higher-resolution arrays capable of holding more information than those produced today.

    "There's information encoded in the molecules that results in getting certain size and spacing of features with certain desirable properties," Paul Nealey, director of the University of Wisconsin-Madison Nanoscale Science and Engineering Center (NSEC), said. "Thermodynamic driving forces make the structures more uniform in size and higher density than you can obtain with the traditional materials."

    Nealey said that block copolymers only need one-fourth as much patterning information as traditional materials to form the desired molecular architecture, which would mean that the manufacturing process could be very efficient. "If you only have to pattern every fourth spot, you can write those patterns at a fraction of the time and expense," he said.

    Before you get too excited about the potential of this technology: No, there were no potential storage numbers mentioned, and there is not even an indication of when this technology could be available. In fact, it is further out than OAW was back in 1998. But the polymer project apparently was not just playtime for the scientists; it in fact has the objective of creating a technology that is industrially viable. "This research addresses one of the most significant challenges to delivering patterned media - the mass production of patterned disks in high volume, at a reasonable cost," says Richard New, director of research at HGST. "The large potential gains in density offered by patterned media make it one of the most promising new technologies on the horizon for future hard disk drives."

    Somehow we feel that this technology is in its very early stages and its future is uncertain. As long as hard drive manufacturers can increase the storage density of today’s hard drives using traditional methods, there is no reason for anyone to take a risk with a new technology. And even if we never see this technology in actual products, the scaling progress made these days is simply stunning.


  3. Chicago (IL) – French website Canardplus today published slides that seem to be part of Intel’s IDF presentation program next week and provide some information on the continuation of Intel’s tick-tock strategy until 2012 and 22 nm processors.

    According to the slides, Intel does not see any roadblocks in its product strategy which is laid out to deliver new processor architectures in even years and die-shrinks in odd years. Nehalem, scheduled for a Q4 release in desktop flavors, is Intel’s new architecture that will carry the company through 2010.

    In 2009, Nehalem will be scaled down from 45 nm to the 32 nm Westmere core. Sandy Bridge will be the successor of Nehalem’s architecture and debut in 2010 as a 32 nm CPU. Ivy Bridge will shrink Sandy Bridge to 22 nm in 2011, and Haswell will be a completely new architecture that is planned to be introduced in 2012 as a 22 nm chip.

    Next week will be packed with technical news about Nehalem, but the published slides do not reveal much more than what we already know – that Nehalem will be an extremely flexible architecture that can be easily fine-tuned because of its modular approach. The number of cores, memory channels, QPI links, cache size, power management features and integrated graphics capability are all adjustable, which is likely to bring more variety to Intel’s product line-up than ever before. This new capability should make Apple especially happy, as the company tends to always look for differentiators for its products.

    Sandy Bridge will bring some interesting new features. Canardplus speculates that it will have eight cores, 512 KB of L2 cache and 16 MB of L3 cache. Big changes include the addition of Intel’s “Advanced Vector Extensions”, which will bring a switch from 128-bit to 256-bit vectors to increase the floating point performance of the CPUs, and a new extensible opcode encoding (VEX) to decrease the size of software code.

    Haswell is Intel’s pitch to keep you interested in what is coming down the road, and we do not expect Intel to release many details of the architecture. What we do know, however, is that it will be a 22 nm chip that may in fact already be part of the research process at Intel’s Oregon D1D fab. Canardplus says that the new architecture will deliver “revolutionary” power savings and will have the capability to support a co-processor, such as a vector accelerator, within a single package. There is also FMA (Fused Multiply-Add) functionality, which enables multiplication and addition processing via the same instruction, resulting in much higher compute density.


  4. Nehalem = i7: Intel unveils new Core processor brand

    Santa Clara (CA) – Intel today unveiled the brand of its upcoming processor: Nehalem, the successor of the Core 2 Duo CPU series, will be introduced as “Core i7” later this year and the company hopes that the new brand will be easily recognized and remembered by customers when they walk into a store to buy a new PC.

    Ever since Gigahertz numbers lost their appeal for AMD’s and Intel’s marketing whizzes, both companies seem to have been desperately looking for a decent sequence numbering system for their products.

    AMD initially chose a strategy to describe its processors with a MHz-like number that was comparable to Intel’s Pentium chips (of course, it did not admit that and said it was only a comparison to the preceding AMD CPU generation) and eventually ended up with processor numbers that are not just inconsistent (Phenom 8000 series), but that very few can actually understand. Intel’s sequence numbers across all products have been a mess for several years: We doubt that the average sales person at your local Best Buy will be able to tell you what the important differences between Intel’s 2000-, 4000-, 6000- and 8000-series desktop processors are.

    When Intel had to come up with a new brand back in 2006, its choice of “Core” was actually smart. Core 2 Duo was always perceived as a simple, trustworthy name that suited the mainstream approach of Intel at the time. But, if you think about it, the brand never made sense – Core 2 Duo in essence means “Core 2 2”. Even if you know that Core 2 Duo means that this is a second-generation Core processor with two cores, you may ask questions about the name, since Core 2 Duo was not the second, but the first generation of the Core micro-architecture.

    Nehalem is a new architecture and Intel had an opportunity to come up with a new processor name, which the company actually did. Nehalem will not become the much-speculated Core 3. Instead, Intel chose to name its next desktop processors with Bloomfield and Lynnfield cores “Core i7”. Intel said that this is “the first of several new identifiers to come as different products launch over the next year.” According to the manufacturer, the Core i7 processor brand logo will be used for high-performance desktop PCs, with a separate black logo for Intel’s highest-end “Extreme Edition.” Intel will continue to add processor model numbers to differentiate each chip.

    Of course, we had questions about why “i7” and what “i7” means. Our guess is that “i” refers to Intel and 7 … well, we don’t know. It isn’t Intel’s seventh-generation chip (that would be the first Pentium 4, which was introduced in 2000 with the 180 nm Willamette core). Intel told us that “i7” was simply chosen because it is “short and sweet”. The company showed some understanding for our confusion over this name choice and promised that i7 would make sense down the road when additional new identifiers are introduced. Intel representatives also admitted that processor buyers need to get familiar with this new name, but the company hopes that, once this happens, they will aim directly for an i7 processor – just like they would know why they would want a 7-series or 5-series BMW. This whole car analogy (interestingly, it is always BMW) comes up whenever AMD or Intel introduces a new brand or sequence number, and it is usually the customer’s task to decipher the numbering mess behind the main brand.

    At this time, Intel provides no guidance on what i7 means, which other identifiers are in the works or how i7 will evolve over time. Our first impression is that i7 is an emotionless and much more technical name than Core 2 Duo. But we have no idea if that will be the general impression of the market, or if it was Intel’s intention to come up with a cold and very technical name. However, we are quite sure that this new processor will create even more confusion for average PC buyers. Core 2 Duos, Core 2 Quads, Pentiums and Celerons are likely to be phased out over time (Intel will keep the Xeon brand, we were told), but they remain available for now. Without a numbering system that works across all processor families and serves as an easy-to-understand indicator of the performance and features of each processor, i7 will only cause additional confusion that, in our opinion, is absolutely unnecessary. We give Intel the benefit of the doubt that i7 will make more sense in a few months.


  5. its probably a non-public update and i do remove a hell of a lot, i only keep about 6 things. but normally it works fine.

    I really think it is the non-public updates. Removing a lot is fine as well, coz i installed vista on my second hdd and removed almost everything except a few, and it also worked fine, including MSN and VS 2008.

    I will integrate all the post-SP1 non-public updates to see if that's the case, or something else.

  6. That's very weird, i never had that problem. I have a MS pre-integrated SP1 vista ultimate x64, and i integrated all post-SP1 hotfixes (including WS4) from Micro Nova. Installed and updated. No problems


    I have installed MSN messenger and also visual studio 2008, no problems so far.

    btw i used this new vLite rc release to make my new vista iso.

  7. Hi

    I verified that all files were in the right places, and they were.

    I ran the ax/dll registration part once more.

    I ran the inf files installation once more.

    I ran some dependent inf file installs, like ks.inf, kscaptur.inf ...

    Rebooted twice, and suddenly it found my tv channels :=)

    Only thing Im missing now is sound :=)

    Thanx for all help!

    The sound problem could be due to missing codecs, as Server does not have these codecs. Download the Vista Codec Pack or the K-Lite codec pack to make sound work, and configure your TV application's decoding properties to use the right codecs. I would recommend the K-Lite codec pack, as it has CyberLink's video decoder (currently the best for video decoding and quality in TV apps). Optionally, you can download the Vista Codec Pack from its home page; that is the best codec pack i have ever used.

    That should do the trick..

  8. Opinion – Intel unveiled some key details about its upcoming Larrabee accelerator/discrete graphics architecture earlier this week, sparking speculation about how this new technology will stack up against what is already out there in the market. We had some time to digest the information and talk to developers - and it appears to be clear that there is more to Larrabee than meets the eye. For example, we are convinced that Larrabee is much more than a graphics card and will debut as a graphics card for just one particular reason. While consumers may be interested in the graphics performance, developers are more interested in standard versus peak performance and how much performance will be provided in exchange for how much coding effort: If Larrabee can deliver what Intel claims it can do, and if Intel can convince developers to work with this new architecture, then general computing could see one of its most significant performance jumps yet.

    Intel engineers are visibly excited about the fact that they could finally disclose some details about Larrabee this week. And while we agree that it is a completely new computing product that will help to usher in a new phase of high-performance computing on professional and consumer platforms, there is the obvious question whether Larrabee can be successful and what Intel will have to do to make it successful. If you look at this question closely, the bottom line is very simple: It comes down to people purchasing the product and developers writing software for it.

    It is one of those chicken-and-egg-problems. Without applications, there is no motivation for consumers to buy Larrabee cards. Without Larrabee cards being bought by consumers, there is no motivation for developers to build applications. So, how do you get consumers to buy Larrabee and how do you get developers to write for Larrabee?

    The answer to the first question is relatively simple. Intel is positioning Larrabee in a market that already exists and that has plenty of applications: Gaming. Intel told us that Larrabee will support all the existing APIs (and some more), which should enable gamers to run games on it. How well Larrabee can run games is unknown to us and a “secret”, according to Intel. But it would be strategic and financial suicide for Intel if Larrabee were not competitive with the best graphics cards at the time of its debut. Nvidia, for example, believes that Larrabee needs to be a “killer product” in order to be successful. Will that be the case? No one knows. Not even Intel – at least not if the company does not have spies in the right spots within Nvidia.

    However, graphics appears to be only the first phase for Larrabee. Those extra features, which Intel says will allow developers to innovate, cover the more interesting part. In the end, Larrabee is a potent accelerator board that just happens to be able to run game graphics and play back videos. But Intel wants developers to do much more with it: Like GPGPUs, Larrabee can ignite a whole new wave of floating-point accelerated (visual computing) applications far beyond gaming. If Intel can get tens of millions of Larrabee cards into the market - by selling Larrabee as a graphics card – the company may be able to motivate developers to go beyond games and take advantage of the advanced cGPU features of Larrabee.

    That, of course, is a milestone Nvidia has reached already. There are more than 70 million GeForce GPUs in consumer PCs that can be used as general purpose GPUs (GPGPUs) and support applications developed and compiled with CUDA. Nvidia has been actively educating students and lecturers on CUDA around the country; it has developed course material, and Nvidia employees have taught CUDA classes themselves for several years at numerous U.S. universities. It cannot be denied that Nvidia has the time advantage on its side. By the time the first Larrabee card is sold, Nvidia will have sold more than 100 million CUDA-capable cards. The company has learned its lessons and it has established developer relations. Intel will have to catch up, and it plans to do so by selling Larrabee as a graphics card to consumers and promoting Larrabee as a product that can be programmed as easily as your average x86 processor. If Larrabee is a success with consumers, developers suddenly will have a choice (we leave ATI’s stream processors out of consideration, since AMD still has work to do to create mainstream appeal for its stream processor cards, work on its software platform and promote its Radeon chips as GPGPU engines). Which way will they go?

    After our initial conversations with a few developers, the trend appears to be the quest for the free lunch. In other words, developers are looking for a platform that is easily approachable and that, in an ideal case, offers them an opportunity to run and scale their applications in a multi-threaded environment without having to understand the hardware. In a way, this is exactly what Nvidia is promising with CUDA and what Intel is saying about Larrabee. Nvidia has always said that CUDA is just a set of C++ extensions, and Intel says that Larrabee runs plain C++ the way an x86 processor would run it. If you ask Nvidia about Larrabee, you will hear that the company doubts that this is the case and claims that applications will not scale as easily as Intel claims, at least not without proper hardware knowledge. If you ask Intel about CUDA, you may hear that the explanation about C++ is an oversimplification and that you do need knowledge of the hardware - the memory architecture, for example - to exploit the horsepower of the GPGPU.

    Our developer sources partially confirmed and partially denied those claims. On Nvidia’s side, it appears that a carelessly programmed CUDA application still runs faster than what you would get from a CPU, but you do need sufficient knowledge of the GPU hardware, such as the memory architecture, to get to the heart of the acceleration. The same is true for Intel’s Larrabee: The claim that developers simply need x86 knowledge to program Larrabee applications is not entirely correct. While Larrabee may accelerate even applications that are not programmed for it, the point of Larrabee is its full potential, and that is only accessible through the vector units, which require vectorized code. If you do not vectorize by hand, you will have to rely on a compiler to do it for you, and it is no secret that automatic vectorization rarely works as well as hand-crafted code. Long story short: Both CUDA and Larrabee development benefit from understanding the hardware. Both platforms promise decent performance without fine-tuning and without knowledge of the hardware. But there seems to be little doubt at this time that developers who understand the devices they are developing for will have a big advantage.

    Interestingly, we talked to developers who believed that Larrabee will not be able to handle essential x86 capabilities, such as system calls. In fact, Intel’s published Larrabee document clearly supports this conclusion. However, Intel confirmed that Larrabee can do everything an x86 CPU can do, and some of these features are actually being achieved through a micro OS that runs on top of the architecture. We got the impression that the way Larrabee supports essential x86 features, and how well they are processed, will be closely watched by developers.

    A key criticism of CUDA by Intel is a lack of flexibility and the fact that its compiler is tied to GPUs. This claim may be true at this time, but could evaporate within a matter of days. Nvidia says CUDA code can run on any multi-core platform. To prove its point, the company is about to roll out an x86 CUDA compiler. Our sources indicate that the software will be available as a beta by the time the company’s tradeshow Nvision08 opens its doors. In that case, CUDA could be considered to be much more flexible than Larrabee, as it will support x86 as well as GPUs (and possibly even AMD/ATI GPUs.) Even if Intel often describes x86 programming as the root of all programming, the company will have to realize that CUDA may have an edge at this point. The question will be how well CUDA code will run on x86 processors.

    There is no doubt that Intel will put all of its marketing and engineering might behind Larrabee. In the end, it is what we perceive as a bridge into the company’s many-core future. It will be critical for the company to succeed on all fronts and to win all of its battles. The hardware needs to deliver, and the company will have to convince developers of the benefit of the vector units in Larrabee, just as it convinced developers to adjust to SSE in its x86 CPUs. There are roadblocks, and Intel’s road to success is not a done deal.

    And, of course, Nvidia is not sitting still. The green team is more powerful than ever and has the leading product in the market right now. I doubt that Nvidia will simply hand over the market to Intel. However, AMD has learned what enormous impact Intel’s massive resources can have. Nvidia should not take this challenge lightly and should accelerate its development efforts in this space. I have said it before and I say it again: GPGPU and cGPU computing is the most exciting opportunity in computing hardware I have seen in nearly two decades, and there is no doubt in my mind that it will catch fire in corporate and consumer markets as soon as one company gets it right.

    Personally, I don’t care whether it will be Nvidia or Intel (or AMD?). The fact is that competition is great and I would hate to see only one company in this segment. The market entry of Intel will drive innovation and it will be interesting to see two of the most powerful semiconductor firms compete in this relatively new market.

    So, who will offer free lunch?


  9. San Francisco (CA) – IBM prepares a big rollout of its Lotus Symphony office suite and what better marketing to support the launch than an anti-Microsoft pitch? IBM said it has “joined forces” with big Linux distribution providers, such as Canonical and Novell, to deliver “Microsoft-free personal computing choices“ by 2009. There are always reasons why you should love to hate Microsoft, apparently.

    IBM said that it has brought Canonical/Ubuntu, Novell and Red Hat on board to, in combination with their hardware partners, ship computing devices that are entirely free of any Microsoft product into the 1-billion-unit desktop market. According to IBM, market forces are shifting and there is “growing demand for economical alternatives to costly Windows and Office-based computers.” The company claims that “Linux is far more profitable for a PC vendor and the operating system is better equipped to work with lower cost hardware than new Microsoft technology.”

    IBM’s pitch includes a pre-loaded PC that comes with the firm’s Open Collaboration Client Solution (OCCS), which includes Lotus Notes, Lotus Symphony and Lotus Sametime. The PCs will also be available with the Linux operating system of each distributor and software applications and installation services from the local partners in each market. The final product will be branded by the local IT firms that bring it to market. In addition, customers, independent software vendors (ISVs) and systems integrators have the choice of developing applications using Lotus Expeditor based on the open source Eclipse programming model, IBM said.

    "We are pleased with the uptake among customers including enterprises, governments, small businesses, and partners adopting OCCS powered by Red Hat's enterprise Linux desktop," said Scott Crenshaw, vice president at Red Hat, in a prepared statement. "Customers are demanding a Microsoft-less PC, and we have responded with our reliable, secure Linux solution through our top channel partners worldwide, building on the success we've seen in Eastern Europe and other markets."

    Operating system choices are always a benefit for the market and consumers, and there is no doubt in our mind that Microsoft’s Windows Vista operating system has made Microsoft much more vulnerable to alternatives such as Mac OS X and Linux. But IBM’s latest campaign is a bit simple and goes beyond our comfort level. A Microsoft sticker on a PC isn’t necessarily bad in every scenario. “Microsoft-free” may be an effective marketing pitch, but the success of these PCs will depend on much more than the hope that people will hate Microsoft enough to jump ship, especially in the desktop market.


  10. @ shahed26 & nuhi: Just tested to leave in WMP and remove DRM. It plays other movies fine, but on first login I got 2 register errors about DRM, when starting and configuring WMP I got the same again. It does not fully register WMP in the system (file reg, shortcuts). So I will now reinstall and leave DRM saved. :)

    Same here. That's the only DRM errors i get on first logon, and when i open WMP11 i also get an error about DRM. After that, it's gone and no error pops up when i logon or open WMP

  11. Hi

    Tested both ProgDVB 5 and GB-PVR, but neither of them does the job any better :=(

    Both find my device, but no channels are found (Yes, I have tested the card/connection with XP, and it works fine there).

    ProgDVB 5 searches, but no channels are found. GB-PVR complains that it cannot connect to the device, even though it finds and lists the device.


    MSVidCtl.dll.mui is the most important file, located in "C:\Windows\System32\en-US" on vista. Make sure that file is copied properly to the server's C:\Windows\System32\en-US directory on server 2008.

    Also make sure "MSVidCtl.dll", the main file needed for channel scanning to work, is copied properly into the C:\Windows\System32 directory. If you're on x64 server then make sure the x86 version of MSVidCtl.dll is in SysWOW64 as well.

    After that, run the Setup BDA.bat again. (Make sure you run the right bat file, as there is one for 64-bit and one for 32-bit systems.)

  12. update posted v0.471

    -fixed bug that Resource Hacked all addons, regardless of choice


    i think you have just about fixed all the possible bugs there were before. It's working fine for me now.

    Now you can maybe start adding some more features... :rolleyes:



  13. Hi again

    I extracted the files from the Vista x64 wim image. Used 7ZIP.

    Sorry for not mentioning that.

    Don't use the tv applications supplied with your tv card, they don't work on server (because they rely heavily on Microsoft's BDA files). Use 3rd party apps like ProgDVB, DVBDream or, the best one, DVBViewer. They work fine

    Please read the instruction file inside the BDA archive carefully. It clearly says that after the restart, when you get the "found devices" prompt, just ignore it, cancel and restart again.

    let me know!!

  14. AMD Fusion details leaked: 40/32 nm, dual-core CPU, RV800 graphics

    Taipei (Taiwan) – AMD pushed Fusion as one of the main reasons to justify its acquisition of ATI. Since then, AMD’s finances have changed colors and are now deep in the red, the top management has changed, and Fusion still isn’t anything AMD wants to discuss in detail. But there are always “industry sources” and these sources have told us that Fusion is likely to be introduced as a half-node chip.

    It appears that AMD’s engineers in Dresden, Markham and Sunnyvale have been making lots of trips to the little island of Formosa lately - the home of contract manufacturer TSMC, which will be producing Fusion CPUs. Our sources indicated that both companies are quite busy laying out the production scenarios of AMD’s first CPU+GPU chip.

    The first Fusion processor is code-named Shrike and will, if our sources are right, consist of a dual-core Phenom CPU and an ATI RV800 GPU core. This news is actually a big surprise, as Shrike was originally rumored to debut as a combination of a dual-core Kuma CPU and an RV710-based graphics unit. A few more quarters of development gave AMD time to continue working on a low-end RV800-based core to be integrated with Fusion. RV800 chips will be DirectX 10.1 compliant and are expected to deliver a bit more than just a 55 nm to 40 nm die shrink.

    While Shrike will debut as a 40 nm chip, the processor is scheduled to transition to 32 nm at the beginning of 2010 - not much later than Intel will introduce 32 nm - and serve as a stop-gap before the next-gen CPU core, code-named “Bulldozer”, arrives. The Bulldozer-based chip, code-named “Falcon”, will debut with TSMC's 32 nm SOI process, instead of the originally planned 45 nm.

    As Fusion is shaping up right now, we should expect the chip to become the first half-node CPU (between 45 and 32 nm) in a very long time.


  15. Intel teases new Larrabee details

    Santa Clara (CA) – Siggraph is just around the corner, so it should not be too surprising that Intel is talking more seriously about Larrabee, a discrete graphics product due for launch in “2009 or 2010”. Intel decided to provide a few more slices of information that are likely to fuel a new round of rumors on the Internet.

    Intel’s presentation to analysts and journalists held several interesting details about the design idea and high-level technology approach of Larrabee, but our two most burning questions were left unanswered and, at least partially, positioned in territory that invites wide speculation: How many cores will Larrabee have, and how will those cores compare to discrete graphics offerings from Nvidia and AMD/ATI? We don’t know for sure, but we received some hints.

    According to Intel, the idea of Larrabee was born out of the need to combine CPU programmability with GPU-style parallelism. While Intel promises that Larrabee, which will be based on a many-core x86 design, will provide “full support of current graphics APIs”, the company said that it will offer developers a clean canvas to develop new APIs for new features. The hope here is that game developers will take advantage of x86 coding to come up with unique features that cannot run on GPUs.

    Intel has developed a 1024-bit-wide bi-directional ring network (512 bits in each direction) for Larrabee to enable agents to communicate with each other in a low-latency manner, resulting in what the company describes as “super fast communication between cores”.
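    A quick back-of-the-envelope sketch of what that ring width implies per clock. The clock frequency here is purely an assumption for illustration - Intel disclosed no Larrabee clock speeds:

    ```python
    # Per-clock payload of a 1024-bit bi-directional ring, 512 bits each way.
    RING_WIDTH_BITS_PER_DIRECTION = 512

    bytes_per_clock_per_direction = RING_WIDTH_BITS_PER_DIRECTION // 8  # 64 bytes
    total_bytes_per_clock = 2 * bytes_per_clock_per_direction           # 128 bytes

    # At a hypothetical 2 GHz core clock (our assumption, not Intel's figure):
    clock_hz = 2e9
    aggregate_gb_per_s = total_bytes_per_clock * clock_hz / 1e9

    print(bytes_per_clock_per_direction)  # 64
    print(total_bytes_per_clock)          # 128
    print(aggregate_gb_per_s)             # 256.0
    ```

    In other words, the ring can move a full 64-byte cache line in each direction every clock, which is presumably what makes the "super fast communication between cores" claim plausible.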

    As previously reported, Larrabee x86 cores (each Larrabee core is actually a full x86 core) are based on a modernized dual-issue Pentium design with a short execution pipeline. The chip design was enhanced with a vector processing unit (VPU; 16 32-bit ops per clock), multi-threading (4x with separate register sets per thread), 64-bit extensions and sophisticated pre-fetching.

    So, how many cores will this many-core product have? Intel says this is still a secret. The presentation charts, however, which we were not allowed to publish, talk about Larrabee examples with 8 to 48 cores. These numbers are in the range of rumors we have heard so far, and it would not surprise us if an 8-core chip were in fact the entry-level product of this “2009 or 2010” part. Intel has often said that Larrabee is “highly scalable,” so 48 should be possible. Factor in the four hardware threads per core and the products talked about can deal with 32 to 192 threads simultaneously.
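    The thread-count arithmetic above follows directly from the four hardware threads per core mentioned earlier; a minimal sketch:

    ```python
    # Each Larrabee core runs 4 hardware threads; the charts mention 8-48 cores.
    THREADS_PER_CORE = 4

    for cores in (8, 16, 24, 32, 48):
        print(cores, "cores ->", cores * THREADS_PER_CORE, "threads")
    # 8 cores -> 32 threads ... 48 cores -> 192 threads,
    # matching the "32 to 192 threads" range in the text.
    ```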

    Performance of Larrabee is a “secret” as well, as is the answer to the question of how many Larrabee cores Intel will need to match Nvidia’s or AMD’s GPUs. But we would hope that Intel would not debut a product as important for the company as Larrabee with performance that is significantly inferior to what is available on the market at the time of launch.

    Scalability may become one of Larrabee’s biggest assets. Intel claims that Larrabee cores can scale almost linearly in games such as Gears of War, FEAR or Half-Life 2, Episode 2: 16 cores will provide twice the performance of 8 cores, 24 cores three times the speed, 32 cores four times, etc. “Almost linearly” translates to “linearly within 10%”, Intel said.
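    That "linearly within 10%" claim can be sketched as a simple check against an 8-core baseline. The function names here are ours, purely for illustration:

    ```python
    # Ideal speedup relative to the 8-core baseline, plus a check against
    # Intel's stated "within 10%" tolerance on linear scaling.
    BASELINE_CORES = 8

    def ideal_speedup(cores: int) -> float:
        return cores / BASELINE_CORES

    def within_claim(measured: float, cores: int, tolerance: float = 0.10) -> bool:
        # "Linearly within 10%": measured speedup falls no more than
        # 10% short of the ideal linear speedup.
        return measured >= ideal_speedup(cores) * (1 - tolerance)

    print(ideal_speedup(16))       # 2.0 (16 cores = twice 8 cores)
    print(ideal_speedup(32))       # 4.0
    print(within_claim(1.85, 16))  # True  (within 10% of 2x)
    print(within_claim(1.70, 16))  # False (more than 10% below 2x)
    ```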

    It is interesting to note that Intel mentioned that Larrabee will “fully support IEEE standards for single and double precision floating-point arithmetic.” AMD’s and Nvidia’s GPGPUs support double-precision processing as well, but typically suffer dramatic performance hits when exposed to double-precision apps. For example, Nvidia told us that the firm’s latest Tesla cards can theoretically hit 900 GFlops to 1 TFlops in single precision but just about 100 GFlops in double precision. Intel did not say how Larrabee performance is affected in double-precision environments.
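    To put Nvidia's quoted Tesla figures in perspective, the implied slowdown works out to roughly an order of magnitude:

    ```python
    # Single- vs double-precision gap from the Tesla figures quoted above.
    sp_gflops = 900.0   # lower end of the quoted 900 GFlops - 1 TFlops SP range
    dp_gflops = 100.0   # quoted double-precision throughput

    slowdown = sp_gflops / dp_gflops
    print(slowdown)  # 9.0 -> roughly a 9-10x hit in double precision
    ```

    Whether Larrabee narrows that gap is exactly the question Intel left open.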


  16. shahed26, DRM - haven't tested WMP without it; I set it as a dependency because it usually comes with the WMP installer.

    TV Tuner support should be needed for tuners regardless of the app; try it, but I doubt it will work.

    Thanks nuhi. Will do a quick test and see how things go.

    Thank you

  17. WOW!!! Just went through the changelog, and i see "new 'Digital Rights Management (DRM)"

    Finally i can make my vista fly, without all this DRM crap

    Thanks so much nuhi, you're simply AWESOME MAN!!! :thumbup


    2 quick questions:

    If i remove 'Digital Rights Management (DRM)' (it says needed for WMP11), will i still be able to use WMP11 to play unprotected movies and music?

    If i remove tv tuner support (it says needed for Media Center), will i still be able to use 3rd party tv software to watch tv?
