
Posts posted by shahed26

  1. Ah ok, I suspected it might have something to do with that. The only other one I see w/ SP1 integrated already (02-28-2008) has all the versions in it. All I want is Ultimate. vLite can remove the other versions right?

    Yes, vLite can remove other versions.

  2. I just get this one. I'm also unable to get the 'Uninstall a program' window. When i click on it, nothing happens. :(

    Did you install TuneUp Utilities 2008? If you did, then go to "Solve problems > TuneUp Repair Wizard". There you will find fixes for all the display problems.

    which options in tune wizard can fix this problem ?

    Am not sure which option fixes it. I just checked everything in "TuneUp Utilities 2008 > Solve problems > TuneUp Repair Wizard", checked all items, applied, and restarted.

    btw if you did patch your uxtheme.dll, DO NOT PATCH the SysWOW64 DLL files (only applies if you're on an x64 OS)

  3. could it be your shell32.dll file? (not sure if it's the same for Vista, i don't use it)

    have either of you tried to mod it? maybe try running modifype on it to make Windows think it's the original file

    that file is responsible for a lot of Windows actions

    maybe try explorer.exe also?

    No, did not mod any Vista files; installed an untouched Vista. This seems to happen when i associate file types. I use ACDSee Pro and Restorator 2007, and they both prompt to handle many file types as their default program. I choose "yes", and after using Vista for a couple of days, this "page error" occurs..

  4. This problem is driving me gradually insane...

    Every attempt i've made so far at following your excellent tutorial has resulted in files being modified but no additions being made.

    In other words, if i try to add anything to install.wim, such as themes or sidebar gadgets, then no changes are made. When i install and check those folders, they are empty. However, any changes i've made to existing files, eg user account .bmp files and the logon screen, are ok.

    Any suggestions would be most welcome.

    Select "commit" Make sure you select (click on the) the image you mounted (eg ultimate), and then click unmount.
  5. Hi

    Am trying to add ONLY a context menu. For example, i want "Tools" as the name, eg

    Tools>addon maker, script editor, and so on

    Basically all my tools and utilities are located in the system32/tools directory, and i want to access them via context menu (right click > Tools) and also have 2 or 3 menus inside the "Tools" context menu, and have them pointed to my exe files which are located in the system32/tools directory.

    How do i add this context menu(s) via inf?

    A basic inf template for this will do, as i do have a bit of experience with inf files.

    Thank you

    (sorry if i posted this in the wrong section)
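    A minimal sketch of what such an inf could look like. Assumptions: the entries should appear on all file types (HKCR\*), and the exe names (addonmaker.exe, scripteditor.exe) are placeholders for your own files. Note that a plain inf can only add flat verbs like these; a true cascading "Tools >" submenu needs a shell extension (or, on Windows 7 and later, the SubCommands registry mechanism).

    ```ini
    ; Sketch only - adjust key paths and exe names to your setup.
    [Version]
    Signature="$Windows NT$"

    [DefaultInstall]
    AddReg=Tools.AddReg

    [Tools.AddReg]
    ; Each pair of lines adds one right-click verb and its command.
    ; %11% is the inf dirid for the system32 directory.
    HKCR,"*\shell\Tools - Addon Maker","",,"Tools - Addon Maker"
    HKCR,"*\shell\Tools - Addon Maker\command","",0x00020000,"%11%\tools\addonmaker.exe ""%1"""
    HKCR,"*\shell\Tools - Script Editor","",,"Tools - Script Editor"
    HKCR,"*\shell\Tools - Script Editor\command","",0x00020000,"%11%\tools\scripteditor.exe ""%1"""
    ```

    The 0x00020000 flag (FLG_ADDREG_TYPE_EXPAND_SZ) makes the command value an expandable string, and ""%1"" passes the clicked file's path, quoted, to your tool.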

  6. Santa Clara (CA) – There is no doubt that the Core micro-architecture will have a special place in Intel’s history. Core reclaimed the performance crown from AMD, it put Intel on the power-efficiency track and it brought the first quad-core processors in a portfolio that has delivered more than 200 different processor models over the course of two years. While the first-generation Core architecture will see its first successors in Core i7 (Nehalem) CPUs later this year and may slowly begin to phase out in the second half of next year, Intel today showed once more the potential of Core with the first 6-core processor – which is not only evidence of its scalability, but also indicates the limits of this architecture.

    Leaving aside the fact that Core was the main product that brought AMD to its knees, with obvious dramatic consequences, the basic aspects of this micro-architecture have been fascinating. From a simple strategic view, Core transformed Intel’s processor business from a desktop-centric development into what is essentially a mobile processor environment. Pretty much every processor (with the exception of the Xeon 7100, Itanium and some Celeron processors) has an energy-efficiency focus and was originally conceived as a mobile CPU. Power consumption levels have been cut dramatically – to 65 watts at Core 2 Duo’s introduction, down from the 100 - 130 watts of the Pentium D series.

    Two years ago, we all wondered: How far can Core scale? Today we know that, at least in clock speed, there was not much progress, at least as far as previous generations are concerned. We heard that Core 2 Duos were running relatively stable at 4-4.5 GHz in Intel’s labs, but the company never saw a reason to go above 3.5 GHz – up from 2.66 GHz at introduction. However, we saw progress in the number of cores. In late 2006, Intel launched the Kentsfield multi-die quad-core processor, and now the Dunnington multi-die six-core chip. They may not be as sophisticated as AMD’s single-die quad-core CPUs, but they certainly leverage lots of manufacturing flexibility and enabled the company to reach the 6-core mark first. AMD plans to release the single-die 6-core “Istanbul” processor in H2 2009.

    Dunnington arrives in seven flavors: four and six cores, three different clock speed levels, six rack-optimized models and four different thermal design power (TDP) specs. Considering that all processors run at more than 2 GHz, the TDPs are impressive. But there are signs that the limits of Core may be reached in the not too distant future.

    Besides the fact that you will need Intel’s data and pricing sheet to figure out which of the processors has how many cores and what power consumption, and the notion that these are Intel’s most expensive processors right behind the Itanium mainframe CPUs (a 32-socket system can cost you $87,000 just in Xeon processors, but will give you 128 processing cores and a whopping 512 MB of L3 cache in exchange), we got stuck a bit on the “max” power rating, which is substantially above the TDP that Intel recommends to system builders as a design guideline for servers.

    According to Intel, the max power rating is a more theoretical value: those processors can consume that much power (170 watts in the case of the X7460), but only under special scenarios, such as synthetic benchmarks. While Intel concedes that the TDP does not indicate the maximum power a processor can draw, the TDP is based on observations under “various high power applications”. However, a “worst case real world application” which can hit or exceed the TDP is possible, according to Intel, and may trigger the chip’s Thermal Control Circuit (TCC) when the processor is running in a “worst-case thermal condition” (at a case temperature at or near 64 degrees Celsius). In such a scenario, the operating frequency and input voltage will be reduced to cool down the chip. Intel says that, in such a worst case scenario, “brief instances of TCC activation are not expected to impact the performance of the processor.” (More details: Xeon MP 7400 datasheet, 3.1 MB PDF download)

    Due to the relatively high power rating, the X7460 is only recommended for 2U systems, while all other 7400-series processors can run in 1U rack-optimized servers. Intel claims that sustained power consumption above the indicated TDP is unlikely and that systems should be designed around this rating as a result, but the company’s rating is effectively an average TDP rather than a max TDP, which suggests that 2.66 GHz 6-core Core processors are hitting the limit of what makes sense as a product. Intel may be able to scale Dunnington a bit higher, but we don’t expect substantially higher clock rates. The company does not have to: by the time AMD’s first 6-core arrives, Intel will have Nehalem in place, with more flexibility and room for Dunnington’s successors.

    It is unusual for Intel to provide max power consumption ratings for its products. For example, the company’s Netburst-based Xeon MP processors with Tulsa core are listed with the generic TDP rating of 150 watts for CPUs above 3.0 GHz and 95 watts below. IT buyers interested in Intel’s latest processors should have a close look at the company’s new Dunnington processors, which are platform compatible with the 7300-series (Tigerton core). Compared to the 7300-series, some quad-core Dunnington processors consume less power and others more (according to the TDP rating) at the same clock speeds, while costing the same money. The 6-core CPUs may be the most interesting reason to look at the lineup, but the privilege of running the first 6-cores will cost $2301 or $2729 per CPU. From a power perspective, certainly the most impressive product in this family is the L7455, which runs six cores and 12 MB of L3 cache at 2.13 GHz at a TDP of 65 watts – and a max power of 85 watts.

    The progress is obvious if you take yourself back two years and imagine running six cores within 65 watts. Fascinating.
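    The article's 32-socket figures check out with simple arithmetic. A quick sanity check, assuming X7460-class parts at the quoted $2729 list price and 16 MB of L3 per chip (the per-chip cache figure is inferred here, not stated in the article):

    ```python
    # Sanity check on the 32-socket system figures quoted above.
    sockets = 32
    l3_per_chip_mb = 16          # assumed X7460-class L3 per chip
    price_per_chip_usd = 2729    # top list price quoted in the article

    total_l3_mb = sockets * l3_per_chip_mb       # 512 MB, matching the article
    cpu_cost_usd = sockets * price_per_chip_usd  # ~$87,000, matching the article
    print(total_l3_mb, cpu_cost_usd)
    ```

    32 x $2729 comes to $87,328, which is the "$87,000 just in Xeon processors" rounded down.
    
    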


  7. Even though i HATE nvidia video cards, because of their awful drivers (had many bad experiences with their drivers), i really want nvidia to get out of this mess and stay in business. Without nvidia, ATI will rule, and we will hardly see anything new in the market, because there will be no competition.

  8. Hi shahed26,

    I have tried your batch script method to no avail. I have verified the files have copied correctly and are in the right place.

    ProgDVB 5 reports that it 'Could not CoCreateInstance CLSID_MPEG2Demultiplexer', and ProgDVB 4 reports 'Run DBA graph error'. My manufacturer's own software is a little more successful, in that it can run and get to the scan channels screen. However, once I scan my local transmitter, it fails to see any channels. It looks like it is failing to lock, as the signal quality bar stays at 0%, despite it being 100% consistently in XP.

    I have also ensured the BDA drivers are installed for my device, and ensured DirectX 9 is running ok.

    Source: Vista x64

    Destination: WinServer 2008 x64

    Card: Lifeview FlyDVB-T (PCI)

    Appreciate any help you can provide!

    Quick Edit:

    Ok, I read some old instructions and decided to run the install option of BDA.inf in /windows/inf/. After restarting, my LifeView application and ProgDVB4 (with graph edits) seem to work, although they are unstable. ProgDVB5 still gives the old error though.

    Guess I will need a better 3rd party app!

    Use DVBViewer. It's the best one (a little confusing when it comes to configuring the app), but all features work great on Server 2008.
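    The "install option of BDA.inf" step mentioned above can also be run from a command prompt. A sketch, assuming the stock BDA.inf sits in C:\Windows\inf (adjust the path if yours differs):

    ```bat
    :: Command-line equivalent of right-clicking an .inf and choosing "Install".
    :: 132 is the conventional flag value for running the DefaultInstall section.
    rundll32 setupapi.dll,InstallHinfSection DefaultInstall 132 C:\Windows\inf\BDA.inf
    ```

    A restart afterwards, as described above, lets the newly registered BDA filters take effect.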

  9. http://money.cnn.com/news/newsfeeds/articl...ire/0432343.htm

    Apparently nVidia may actually have to pay for their failures, if a court agrees with the plaintiffs. Bias indeed.

    LOL... :yes: They are paying for it now :lol:

    A lawsuit filed in a California court on Tuesday alleged Nvidia violated U.S. securities laws and concealed the existence of a serious defect in its graphics-chip line for at least eight months "in a series of false and misleading statements made to the investing public."

    The lawsuit charged that Nvidia CEO Jen-Hsun Huang and CFO Marvin Burkett knew as early as November 2007 about a flaw that exists in the packaging used with some of the company's graphics chips that caused them to fail at unusually high rates. Nvidia did not immediately reply to an e-mail request for comment on the lawsuit.

    Nvidia publicly acknowledged the flaw on July 2, when it announced plans to take a one-time charge of up to US$200 million to cover warranty costs related to the problem. That announcement caused Nvidia's stock price to fall by 31 percent to $12.98 and reduced the company's market capitalization by $3 billion, the lawsuit said.

    More details


  10. Hi everyone,

    I was just on my computer, running 2 VMs, a few apps and a video, and whilst changing the screen resolution in a VM and switching to full screen i got a nice blue screen with Nvlddmkm.sys named as the culprit. :realmad: I am so fed up with NVidia's crappy drivers. What should i do with this? Should i report it to anyone? If anyone can suggest who i can moan to about this, i will, because i don't know about you guys, but when i spend over £100 on a graphics card alone i expect the drivers to have been written properly!


    LOL... not surprised to hear that, nvidia drivers are crap, they have wicked hardware, but they can't write good drivers at all.

    check this out


  11. Then the GeForce 4 Ti4x00 came out, and it was supposed to be oh-so-great, so I spent a fair amount of money on it, and it turned out it sucked. The drivers weren't too great, the video input (it was a VIVO card) was crippled with Macrovision detection (unless you used ancient drivers), the video output (s-video) quality was pretty awful, and you *HAD* to reboot to change which display is your primary monitor (so things like the video overlay work, so you can play stuff on your TV -- think HTPC). That was perhaps the most problematic/most deceiving card I've ever owned.

    That's what i mean guys, their drivers are very bad, but their hardware features are good. Too bad NVIDIA can't write good drivers, even now with their new cards. No point having a top video card if there are no good drivers, and that's been the case with NVIDIA.

    If you're after a video card with excellent all-round features, and features that actually work, then go for the ATI 3800 or 4800 series. AVOID nvidia!!

    (unless you're a pure GAMER, then nvidia is a good choice. NOTE: ATI's recent 4800 series cards can outperform nvidia's new card in almost every benchmark)

  12. San Jose (CA) - Believe it or not, NVISION 08 is not just about Nvidia. Earlier today we met with Gigabyte to see what we can expect from the Taiwanese manufacturer – and got a glimpse at an upcoming motherboard for Intel’s Core i7 processors with Nehalem core. The board, called Extreme Edition, has several highlights, including the ability to transform your PC into a true deskside supercomputer that offers the processing horsepower of thousands of processors from ten years ago.

    The prototype board on display was based on Intel's X58 chipset and supports up to six graphics cards via four PCIe Gen2 x16 slots and two wide-open Gen2 x4 slots. Due to space constraints there is only Crossfire and no SLI support. So, what can you do with six graphics cards – for example six Radeon 4850 or six Nvidia 9800 GT models?

    You could run up to 12 monitors, which should be a dream for any flight simulator enthusiast. While you can run up to four cards in Crossfire (graphics) mode, you can employ all six cards for GPGPU applications and floating point acceleration. The theoretical performance potential of such an environment would be in the 6 TFlops neighborhood for single-precision applications (double precision will cause the performance to drop by 80 – 90%.) To put this performance into perspective, consider the fact that Intel’s 1997 Pentium Pro supercomputer with 10,000 CPUs was rated at 1 TFlops.
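    The throughput claim above is easy to check. A back-of-the-envelope sketch, assuming roughly 1 TFlops of single precision per card (an approximation for a Radeon 4850, not a measured number):

    ```python
    # Back-of-the-envelope check of the article's aggregate throughput figures.
    cards = 6
    sp_per_card_tflops = 1.0          # assumed ~1 TFlops single precision per card
    sp_total = cards * sp_per_card_tflops

    # Article: double precision drops throughput by 80-90%
    dp_low  = sp_total * (1 - 0.90)
    dp_high = sp_total * (1 - 0.80)

    print(f"Aggregate single precision: {sp_total:.1f} TFlops")
    print(f"Double precision range:     {dp_low:.1f} - {dp_high:.1f} TFlops")
    ```

    Even the double-precision floor (roughly 0.6 TFlops) is within striking distance of the 1 TFlops rating of the 10,000-CPU machine the article compares against.
    
    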

    The actual performance advantage of supercomputers is not entirely based on pure processing horsepower, but also on memory capacity, which GPUs cannot match. But the simple thought that you can add six graphics cards with a combined 4800 stream processors for about $1200 to rival the performance of supercomputers that cost billions of dollars a decade ago (at least in some applications) is stunning.

    On the power side, Gigabyte’s engineers developed a separated 12-phase power supply for the CPU; a 2-Phase structure is in place for the memory and a separate 2-Phase regulation for the PCI Express slots. The company indicated that there will be room for overclocking and special attention was paid to ensure “workstation-class stability under any conditions”.

    The board can support up to 24 GB DDR3-1333, or 6 GB of DDR3-1900/2000 memory (using overclocked 2 GB DIMMs). Thanks to the 2-Phase regulation, there should be enough juice to hold future 4 GB DDR3-1333 modules.

    The board is still being worked on and the prototype will undergo significant modifications in the storage and cooling department, we were told. The combined air/water-cooling block will be modified so that the six graphics cards can fit. All six SATA ports will be rotated to support extended-length PCIe cards.


  13. @shahed26

    i read that NVIDIA is recommended for use with Maya, a 3D program, so it's not only games it's good at, i guess... or maybe, for software, NVIDIA works better with Maya only? i think NVIDIA is better in OpenGL?

    ATI and NVIDIA's recent cards are both good. But i wouldn't buy a NVIDIA card just because it works better for one particular piece of software; i would go for one that works well with everything i throw at it, and ATI is IMO better because of their excellent drivers. OpenGL is maybe better with NVIDIA, but ATI is not bad either.

    ATI had rough times in the past where NVIDIA dominated, but that's all changed now. ATI all the way for me!!

  14. ATI is the best in almost everything EXCEPT for games. Am not a gamer, but i have been using ATI cards for years. Their image quality and features are the best compared to NVIDIA. Nvidia are better when it comes to gaming.

    Nvidia is also always sort of overpriced compared to ATI.

    Nvidia's drivers are poor compared to ATI's. ATI write very good drivers, while Nvidia's drivers are much more optimized for games; that's why ATI can hardly beat NVIDIA in the gaming arena. But that's changed with their 4800 series cards.

    When i look at a video card, i look more at video decoding/encoding and other features, and ATI does all these things better than nvidia. Nvidia also has good video and other features, but their driver coding lets them down.

    SLI/CROSSFIRE are just overhyped. They do not give you the boost in games you expect; only a very little speed gain is what you get, and you end up paying for two cards, plus EXTRA power consumption, or most people even end up buying an 800W PSU that costs a lot, unless you go for a cheap one.

    So overall ATI is much better, unless you're a gamer, then nvidia. ATI are really back in business with their new 4800 series cards, and even forced Nvidia to drop the ridiculous price of the GTX200 to match ATI's 4800 series card, which performs much better in most gaming benchmarks when compared to the GTX200.

    My vote goes for ATI


    GPU Vote for 4800

  15. BDA (TV Tuner) support for Server 2008 x86 and x64


    I compiled a setup file that will add BDA (TV Tuner) support on Server 2008 x86 and x64, without any hassle.

    I tested it on a clean installation several times; no issues at all

    Hi .. can I get this to work in XP Professional 2002, Service Pack 3?

    I have run into a major problem getting my TV tuner card to work in XP Pro, have posted about it in the Windows XP section, repeating it here in the hope that someone will be able to advise what I can do to fix this.



    The problem is that my filter graph to control the tuner card works great in XP MCE 2005, does not work in XP Pro - BDA network provider filter cannot tune to signal - signal strength is always 0.

    If anyone is interested and/or knows what the fix is, please check the detailed post at:


    Thanks in advance.

    This is ONLY FOR SERVER 2008. Windows XP already has BDA support; no need for this setup pack. Regarding your problem, you can try installing the latest DirectX 9 redist to see if that solves it.

  16. Analyst Opinion - This week I have been spending some time with AMD listening to an update of their workstation and server roadmap. AMD’s message: We are healthy and we are executing once again. However, they admitted that their misses in 2007 hurt the company a great deal in revenue, profitability and - even more importantly - credibility.

    These events are similar to a pep talk before a big game. They are long on promise and hope, but may not reflect what will happen when the teams hit the field. For much of what they are talking about, the game is still months off so take that into consideration as you read this. In effect, I am in the AMD locker room and only telling their side of the story today. Someone standing in the Intel locker room at IDF (Intel Developer Forum) next week will undoubtedly get a different picture and both will probably not reflect what happens in the real game.

    AMD’s OEM fan club

    For AMD, one of the things that has always worked for them, at least while they are much smaller than Intel, is that the OEMs (and this applies to PCs, servers, and workstations) want them in the market, because they don’t like the idea of not being able to bid companies against each other. In addition, when you have a single vendor and that vendor has a problem executing, it takes out the entire eco-system and these OEMs don’t like that kind of risk.

    Typically, the dominant vendor has the advantage, because the industry standardizes on them, but because of this single vendor fear, AMD gets a significant boost, if they can either meet or beat Intel values. Currently, the industry is struggling with a change in how you measure the server side of this market shifting from pure performance metrics to those that factor in energy efficiency. AMD believes they have a significant advantage once you factor in energy.


    Shanghai replaces Barcelona later this year and, on paper, it looks impressive. AMD is promising better performance, lower idle power consumption, improved pricing for performance and minimal changes over Barcelona. These are significant improvements and AMD argues that this new Shanghai part will significantly exceed Intel’s Harpertown in performance. We will have to wait for independent benchmarks to confirm whether this is the case and Intel clearly isn’t going to be standing still. Their roadmap will be presented at the Intel Developer Forum next week.

    Shanghai won’t really be aggressively moving into markets until the first quarter of next year, so the real competitive results, in terms of sales, probably won’t be known until AMD and Intel report sales numbers at the end of the first quarter in 2009.

    One of the reasons for the delay between launch and market ramp is the validation process that has to occur when any new part is brought to market. While Shanghai is similar to Barcelona, it has enough differences to still require a validation test. A big problem with Shanghai now is that it is both substantially better than Barcelona and coming relatively soon, which could cause buyers to defer server and workstation purchases until the CPU becomes available. However, one good part of a bad market (and economic conditions in general are bad), is that funds available this year may not be available next year and funds for purchases may be in the “use it or lose it” category, which should offset this tendency to defer somewhat.

    Microsoft’s role in AMD’s future

    One of the advantages that AMD has over Intel is they don’t seem to bump heads with Microsoft very much. Intel and Microsoft are both dominant in their respective segments and neither likes to give up leadership to the other. As a result, Intel tends to be very aggressive on Linux and Microsoft tends to support AMD more aggressively in response. Microsoft apparently is tuning its virtualization offerings, which have actually started to sell impressively well and really stunned VMware, the firm that really put virtualization on the map.

    What is often kind of funny to watch is how Microsoft and Intel do this dance. It’s almost like watching a bad marriage where everyone knows the couple is living separately, fights a lot and has new partners, but the couple publicly acts like they are still close. In the case of Linux for Intel and AMD for Microsoft, I often wonder how this would pan out if these companies were people.

    This means that while Microsoft does favor AMD, their public posture doesn’t fully represent just how close the two companies actually are. And for servers in particular, this should show some strong additional benefits for AMD’s offerings as the next generation of products from both AMD and Microsoft roll to market.

    Industry problems

    One of the things that both AMD and Intel have to deal with is that the future is uncertain. Currently both Intel and AMD are chasing, particularly in the server space, the idea of massive multi-core roadmaps, which assume the future will be able to use these new systems. While Microsoft has shifted to a per socket pricing model, other firms like Oracle still price their software on a per-core basis, which works against this multi-core trend as the software cost negates much of the hardware advantage. In addition, programming for massive multi-core systems has been incredibly difficult and most existing applications have trouble scaling to more than two or three cores let alone the 16 to 32 that are clearly coming.

    Consolidation - combining many servers into fewer servers with more cores - and virtualization (which allows a single server to act as if it were a large number of servers) clearly help, but these shifts are still in their infancy and the market often shifts between models suggesting the possibility that the current massively multi-core roadmap may not accurately reflect the real future. And this idea of cloud computing, which shifts the market from a focus on selling hardware to selling services, has massive implications for this segment.

    Wrapping up

    AMD feels like they have their act together now, and their recent competitive successes in the graphics segment appear to confirm this.

    Barcelona is in the market, finally, and Shanghai, if it arrives as promised, should help confirm that AMD is on track again. Granted, until it actually ships, this is more promise than reality, which adds a cautionary element. Shanghai, everything else aside, will be the most valuable product to AMD in terms of restoring the company’s credibility if it ships in Q4. Intel clearly isn’t giving any ground and we’ll likely see next week that Intel has come to the court to play as well. This should be an interesting second half in the processor wars, at least for servers and workstations.


  17. Santa Clara (CA) – Intel said it has developed a new line of motherboards that will enable PCs to retain certain functionality even when they are in a power-saving sleep mode and consume potentially less than 20% of the power they normally would.

    The new motherboards of the DG45FC and DG45ID series are planned to begin shipping next month and will come with so-called remote wake technology. It is not an entirely new feature, as it has previously been available on Intel’s enterprise-focused vPro platforms (introduced in August 2007) with slightly different functionality, which allowed system administrators to carry out repairs or updates remotely on PCs within their organizations.

    For consumer PCs, remote wake will allow PCs to automatically wake up to run certain tasks, such as answering a phone call and media downloads. Among the first companies to support Intel’s new hardware are VoIP provider Jajah, CyberLink, Orb and Pando Networks.

    Intel said that the technology will be made available with four motherboards for desktop computers initially. At this time, the feature is only available through an Ethernet connection, which means that Wi-Fi users who want to use remote wake will have to connect their PC to their Wi-Fi router with an Ethernet cable.

    The message behind this remote wake technology is, you guessed it, a green one. If you don’t work on your PC and its only task is to wait for phone calls, a typical consumption of at least 60 watts (or triple-digit numbers in higher-performance PCs) is clearly a waste of energy. However, if the PC is powered down into sleep state, consumption can drop to about 10 watts in S3 sleep mode. That should not only have an impact on your power bill, but should make you feel much better about your environmental efforts as well.
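    The savings from those two figures are easy to put in numbers. A rough sketch, where the 20 idle hours per day and the $0.12/kWh electricity rate are illustrative assumptions, not from the article:

    ```python
    # Rough annual savings if a PC sits in S3 sleep (~10 W) instead of
    # idling at full power (~60 W) whenever it is not in use.
    active_w, sleep_w = 60.0, 10.0
    idle_hours_per_day = 20       # assumed idle time
    rate_per_kwh = 0.12           # assumed electricity price, USD

    kwh_saved_per_year = (active_w - sleep_w) / 1000 * idle_hours_per_day * 365
    dollars_saved = kwh_saved_per_year * rate_per_kwh
    print(f"{kwh_saved_per_year:.0f} kWh/year saved, about ${dollars_saved:.2f}")
    ```

    Under these assumptions the difference works out to 365 kWh a year, so the green message translates into real, if modest, money.
    
    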

