CoffeeFiend

Patron
  • Posts

    4,973
  • Joined

  • Last visited

  • Donations

    0.00 USD 
  • Country

    Canada

Everything posted by CoffeeFiend

  1. It's easy. But I'm not sure what exactly you want to do. You're posting in a web dev section (suggesting ASP.NET apps, which run on a web server), and then asking how it's done in Vista (which has nothing to do with the previous part). I'll assume it was an ASP.NET question. I'm not going to bother explaining it all in detail as there are already countless articles doing so. Try MS' own QuickStarts tutorial here. Everything is thoroughly documented (MSDN2), pretty much all community sites have related tutorials too, google can find more results than you'll ever need, there's MS' newsgroups, etc. You can use user profiles to store language preferences, URL rewriting (using an HTTP module, e.g. www.yoursite.com/en-US/whatever), or even the browser's Accept-Language header.
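To illustrate the Accept-Language option: the header is a comma-separated list of language tags with optional quality values, and the server picks the best tag it supports. A minimal sketch in Python (the tags and the fallback default are made-up examples, and this ignores partial matches like "fr" vs "fr-CA"):

```python
def pick_language(accept_language, supported, default="en-US"):
    """Pick the highest-quality supported tag from an Accept-Language header.

    accept_language: raw header, e.g. "fr-CA,fr;q=0.8,en-US;q=0.5"
    supported: collection of language tags the site actually has content for
    """
    candidates = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, _, q = piece.partition(";q=")
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0  # malformed q-value: treat as lowest priority
        else:
            tag, quality = piece, 1.0  # no q-value means q=1.0
        candidates.append((quality, tag.strip()))
    # Try tags from highest quality down; fall back to the site default.
    for _, tag in sorted(candidates, reverse=True):
        if tag in supported:
            return tag
    return default
```

In a real ASP.NET app you'd read the already-parsed list from Request.UserLanguages instead; this just shows the selection logic.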
  2. But it'll cost 200$ for no reason, and since it doesn't start as a service (unless they've changed something in v6), every time someone logs in (he said it's a family PC with many user accounts) they would have to restart the virtual machines manually, and stop them when they're done (they don't keep running when logged off), which would be extremely inconvenient if you plan on using the other OS instance for anything at all (always getting kicked off by people logging on and off). And that's assuming all users have access (NTFS permissions and all) to do it, and remember to do it too. And the constant start/stop of the app and VMs wastes a lot of time; it's quite annoying for everyone logging in to wait a few minutes before they can do anything. This is the type of situation VMware Server is meant for (Workstation just doesn't work so well here) - VMs running without interruption (often for server processes -- and when you think of it, what he's asking for is somewhat like a "mini terminal server" really), without the need for a user to manually start them. VMware Workstation works better for testing stuff, single users working with their own VMs while they're logged on (like for using apps that require another OS) and such things. Besides, even if both apps worked equally well (it's not the case, but anyways), I see no real justification to replace a free app with one that costs 200$ without any real benefits. Even VPC (v2007 betas are available too) and Virtual Server are both free now, and there are other cheap alternatives like Parallels Workstation (75% cheaper than VMware Workstation). I see no reason to shell out 200$ unless one needs specific features it has to offer.
  3. I fail to see the relevance of VNC here, it's a remote control app and nothing more... Perhaps you thought of something special, but if that's the case then you're extremely short on details to say the least. His problem isn't with finding an app to remotely control other PCs/OSes but with a virtualization setup. As for the original post: 1. It's not a virtual OS. An OS is an OS, it's not virtual. Only the host (computer) is virtual. Is it possible? Absolutely - I do it everyday. 2. We'll assume you don't want to connect to windows - or not the instance that's running on the bare metal (real hardware) at least, since you're looking for other ways than what computerMan mentioned. VMware running all the time? Easy: don't use VMware Workstation! Use VMware Server instead, which will start as a service and in turn start the virtual machines you've configured to auto-start when it boots. It's a better virtualization product IMO (depends on your use for virtualization products I guess), and it's 200$ cheaper (it's free!) Then you connect to your (already running) guest OSes (running on the virtual hardware inside VMware) the way you normally would (MSTSC, VNC, Citrix, FreeNX, SSH or whatever you want/need/like, depending on the OS). You must set up the network properly first (e.g. use NAT and forward the required ports, or use spare/dedicated NICs for the guest OSes). The only real advice I have is: 1) Read the VMware networking documentation/tutorials (see how to configure it, i.e. editing config files), and find out what ports you need to control remotely 2) Buy more RAM! You will need LOTS of it if you want decent performance.
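To give an idea of the NAT port-forwarding part: on Windows hosts, VMware's NAT service reads a vmnetnat.conf file with [incomingtcp]/[incomingudp] sections that map a host port to a guest IP:port. A rough sketch (the guest IP and ports below are made-up examples; check the VMware networking docs for your version's exact file location and syntax):

```ini
# Forward host ports to a guest on the NAT network (vmnetnat.conf sketch)
[incomingtcp]
# host port = guest IP:guest port
3389 = 192.168.174.128:3389
5900 = 192.168.174.128:5900
```

After restarting the VMware NAT service, connections to those ports on the host should reach the guest (3389 for Remote Desktop, 5900 for VNC, in this example).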
  4. Is it garbage? No, it's much WORSE than that. Saying it's garbage is too much of an understatement, so it doesn't describe it accurately enough. I cannot find a word strong enough to express how bad it truly is. Lots of us could go on very long rants about it: how it sucks, how it makes a PC slower than a dead snail hibernating, how buggy norton stuff is (we've had sooooo much trouble caused by SAV!), how it consistently keeps getting slower and more bloated every year, how the registration process is getting increasingly and unnecessarily troublesome (very long serials, online activation, etc), how dumbed down it is, how sub-par the firewall and AV are (and how much better even free apps often are), how easily it gets "pwned" ("start keylogger" anyone? or using WMI in a vbscript to disable the services, etc), how their support (even paid support) sucks, how intrusive it is, how hard it is to uninstall it (there's an app just for that!) and how doing so usually breaks things, how much of a waste of money it is (definitions/renewals cost money too - not just the initial purchase), etc. I would rate it a solid 0 out of 10, and that's already being generous. Their days of making useful apps are long gone (like the original norton utilities). Some of the older versions of some apps are still somewhat useful (e.g. ghost 8.2) but are usually surpassed by the alternatives (e.g. Acronis True Image). I'm sure they'll manage to make all their new acquisitions (like veritas' products) suck beyond belief too - it's just a matter of time.
  5. They can be used for tons of different purposes: NLB (and more aggregate BW), redundancy, routing and NAT (like sharing an internet connection), bridging networks, they can be used in vmware by guest OSes... You name it! And that's just plain old wired ethernet adapters. There's lots of other types of network adapters, from wireless, modems, firewire, DVB cards, non-ethernet NICs (e.g. token ring), non-UTP ethernet NICs (e.g. fiber), more "specialized" network interfaces (e.g. HPNA), etc. If you count those, there's even more uses. That's much like asking what a 2nd car could be used for...
  6. One vote for "other": homebrew (and NOT using php!). There's a lot of popular ones not on the list, e.g. coppermine. I think "homebrew" should be an option too, as well as static pages (many apps can generate static pages + thumbnails, such as photoshop). Even "hosted" should be an option (to represent countless places like smugmug)
  7. You might be interested in some SSW apps, especially CodeAuditor. They've got a trial version. It's a bit on the expensive side IMO (390 AUD, or ~300 USD), but it's quite nice.
  8. I wouldn't call it a war personally, but sick of it? As of the 5th post (where the endless FUD, misinformation and pointless bashing started).
  9. A couple bytes here and there just doesn't add up to much. And no, data is not inherently aligned; that's patently false. It makes no sense in any way I read it. Well, guess what? Most people don't take several years of university courses full-time, spending many thousands of $, to program trivial hobbyist apps/projects. Just like one doesn't take a full EE course to change the batteries in a flashlight. You're criticizing universities over stuff that's mostly irrelevant (if not 100% wrong), but now you're even disregarding their main purpose: learning something, with the intent of earning a living. No it's not. You trash talked all open source software in one post, but that was the only reference to OSS in the whole thread. Either way, there's no reason for universities to change their ways and teach inane stuff instead. OSS programmers (except the lone hobbyist on a tiny project with unlimited time [at the expense of having no life] and no money constraints) need much the same - in fact, a large part of those OSS programmers work in large teams, paid by corporations, e.g. mozilla. The license or price changes absolutely nothing about the task at hand, nor the required knowledge.
  10. Providing you don't have a problem with stepping on the mouse's own wire when you move the mouse back (after reaching the end of the mouse pad), the wire sometimes pulling the mouse and making it fall behind the desk and such annoyances, why not? It sure annoys the hell out of me (can't say I'm a big fan of mice to begin with though) Batteries wise, yeah, I hear you. So many things use batteries nowadays (wireless mouse/keyboards, countless remotes: for the TV/home theater amp/DVD player/sat receivers/XBOX/sound card/PC speakers/MCE remote/X10/2 for AC/camera, tons of kids toys, digital cameras and 3 flashes, laptop, mp3 players, X10 wireless devices, flashlights, smoke detectors, thermometers, various power tools, and so many more gadgets and things I'm forgetting), it quickly becomes something near a full time job to keep track of batteries (in pairs), charging them and all. Batteries everywhere. Countless power adapters. Lovely! Thankfully some of the newer mice are easier on the batteries (the one that came with an MS Wireless Optical Desktop 4000 lasts like 6 months, whereas the old Intellimouse Explorer was more like a month). Mind you, that's not the main reason not to go wireless though (for me at least). If you have enough wireless devices (several mice/keyboards and such) in that 27MHz band, eventually you run into trouble: nothing wants to work anymore (like WiFi and 2.4GHz wireless phones). I've had 2 wireless mice (on the same PC) fighting each other like that before... I've never seen one not be able to do 6ft range though (I've never tried 50ft, I'd have to walk out of the house and look with binos to see if the cursor still moves) Either way, I use wired trackballs instead.
  11. IRC Channel

    Besides, a channel for MSFN in general was tried a few times, and it wasn't successful AT ALL. Creating one for one specialized (and relatively simple) app, especially as lots of people are switching to Vista... I wouldn't expect it to be a big hit no matter which server it's on.
  12. Like Zxian just said, you essentially made my point! You saved all of 10 bytes (~nothing), and once all your data is properly aligned and all, you just might end up with a bigger binary. It will be the case with most programs (again, reference material mentions this too). It's easy to see too: compiling an app to use SIMD instruction sets usually results in larger binaries (but they WILL be faster). Besides, most sensible programs will do a feature check of the CPU, and use whatever is available for the math lib, which directly makes for larger binaries to start with. Similarly, 64 bit binaries are usually bigger, but also run faster. It's a total non-issue. And if you really care so much about code density, you shouldn't be using MMX anymore - SSE is denser (and faster). MMX is old stuff that's been replaced by better/newer techs. MMX isn't even supported anymore in some environments like Win64: ML64 (MASM) will give you an "error A2222: x87 and MMX instructions disallowed; legacy FP state not saved in Win64" error. SSE/SSE2 is where it's at nowadays. Denser, faster, handles more data at once and all (even though the binaries might be even bigger, data being 16 byte aligned with SSE2) And I wouldn't be surprised that schools don't teach your own totally bogus metrics. There's no standard "efficiency" measure of any type (you seem to confuse this with code density, which is unrelated, and has nothing to do with speed at all). And code speed isn't "inversely proportional to the size" - quite the opposite. Compilers don't have options to compile for size *OR* speed for no reason; by your reasoning, they'd both be one and the same. Often, the larger code is faster (like 64 bit binaries, code that implements math libs that check and use whatever your CPU has to make it faster, binaries being larger due to aligned data and such). Speed is what really matters. Virtually nobody wants slow apps that save a couple bytes of disk space (even glocK_94 agreed on that!)
Besides, schools don't have time to teach absolutely everything. A lot of it comes with continued learning & experience. So they focus on what's more important. Things like:
  - knowing/understanding operating systems and such required stuff
  - knowing/understanding various things like databases, XML, threads, network programming, etc.
  - ensuring they have a firm grasp of concepts like OOP, the MVC model, n-tier development, etc.
  - teaching them about software engineering, patterns, UML, algorithms, etc.
  - making sure they know a few different and useful languages: are competent with them, write secure/stable/quality code, comment properly, etc.
  - ensuring they understand the web side of development as well ([x]html, css, js, server-side techs, etc.)
  - project planning and estimates
  - documenting your stuff
  - profiling, load testing, effective debugging, etc.
  - application life cycle management
  - team development (using some SCM), unit tests, code coverage, continuous integration, etc.
  - some different/more advanced stuff in various courses (3D stuff, data warehousing & OLAP, etc.)
  - problem solving in general, and understanding the process (user requirements, etc.)
  - the basics of several fields like GUI design, usability, QA, deployment, etc.
But like most schools, the primary intent is learning to learn (and learning by doing). Especially in programming and IT. New stuff comes at an incredible pace. It's very hard to keep up with everything (the sheer amount of new stuff that came out in the last year or so is totally insane!), and it requires constant learning/retraining to stay current. The purpose is to make employable programmers who will be able to pick up new stuff as they have to (inevitable), quickly grasp the specific task at hand, and be productive in a team setting, working on projects of various sizes/complexity.
The intent is not to ensure they can write over-optimized trivial apps that run on a 286 disregarding cost/time (much less optimized to save a couple bytes of dirt cheap disk space at the expense of performance!)
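The alignment point made earlier is easy to demonstrate with Python's standard struct module: the same three fields take more room with native alignment (which inserts padding so each field sits on its natural boundary) than when tightly packed.

```python
import struct

# One char, one 4-byte int, one 8-byte double.
# "@" = native byte order with native alignment (padding inserted);
# "=" = native byte order with standard sizes and NO padding.
aligned = struct.calcsize("@cid")
packed = struct.calcsize("=cid")

print("aligned:", aligned, "packed:", packed)
# On a typical 64-bit platform aligned is 16 and packed is 13 -
# the extra bytes are padding before the int and the double.
```

The same padding happens in compiled binaries, which is one reason properly aligned (and faster) data can make the file a bit bigger.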
  13. All OSes != linux. If I just realized all OSes are, by someone else's standards, ALL bloat? Yes. ...much like your post, with identical logic. Look up "democracy" - it has NOTHING to do with arguments or the lack of them. Yes, people can have opinions, but they can often be misguided, uninformed, and sometimes just plain wrong, so not always useful nor insightful. No, I haven't. Nowhere. Ever. You're the one suggesting things here. Uhmmm. No. I've fought against the massive misinformation, constant FUD and repeated "attacks" from exactly ONE person, until you joined in. I haven't really argued against anyone else so far (be it jcarle, zxian, cyberjoe2, Tarun...) If you say so! (I didn't say that, and never directly referred to members of any specific forum anywhere - let alone about any specific topic) n00bs about programming for the most part? Very likely. But then again, it's the same thing with most other topics (global economy, EE, bioengineering, etc), and I certainly fall into that n00b group for most of those topics too, and my input would be rather worthless in many of those cases as well. Well, that's EXACTLY how you think! You claim it's not a democracy ("Why democracy since there are experts who know better?")... Mind you, I don't even run Vista (just check to see how active I am in those Vista threads, you'll see). Coming from someone saying "don't get aggressive"? Quite "funny". When I check the thread, what I see before you started posting was objective, on-topic discussion/debate without any flaming. Check the thread, really.
  14. So something is useless for people unless they have a use for it (apps that require it)? So all OSes are really just pure bloat, unless someone happens to benefit from running the programs they want. Cars aren't stupid, they're useless. They're just driveway clutter, unless you want to go places... Your friends must be rich and able to afford that much more driveway space, because here everybody complains about the driveway clutter. I love your absolute lack of logic no matter how I try to put this. Experts make for a RELEVANT discussion (unlike the one we're having right now). Uneducated n00bs just don't make for an insightful debate. And somehow, this just turned into commies or something. Go logic! Ah, still in absolute-lack-of-logic mode, eh? This isn't at all about me commenting, but rather about uninformed n00bs NOT commenting (on any subject really). Yet, my obvious point got changed into "haters should shut up" when you read it? And somehow, that makes one person (for stating the obvious) an elitist. Logic 101... This was about having an insightful and intelligent debate/discussion between folks who at least have rudimentary/basic knowledge of the subject at hand (communism, it seems). A group of experts in global economy not listening to the opinions of all the n00bs saying "they're stealing our jobs!" makes every single one of them absolute elitists and arrogant pricks too (and communist ones at that), who look down on every single individual out there. Somehow it wasn't all that hard to figure out you must've been a Vista (the best OS ever) basher too. You keep enjoying your computer stone age forever...
  15. IMO? Not AT ALL! Skype is hardly the only good one - I wouldn't quite say it doesn't belong on the list, but there's better out there IMO. A decent VoIP company (like I'm using), and if you want extra features like free international calling thru FreeWorldDialup and such, then look into TrixBox (to go along with your ATA) - as long as your provider supports IAX (hopefully with good codec choices too). Skype sound quality? It's below almost any VoIP I've tried (my VoIP sounds FAR better than a landline). VoIP doesn't have to suck, cost too much (like vonage), or have bad voice quality. As far as skype goes, I wouldn't save 5$/mo using them over my current provider, and the difference in quality/service/bandwidth usage/reliability and all is well worth it (hey, I can even call 1-800 #s and it actually works, unlike via skypeout)
  16. Yes, the defaults are pretty good, although some stuff is disabled by default for security reasons (understandably). As for IIS5, it did work IIRC, but I don't have a Win2k box around to test, so I can't tell for sure (we hardly ever deploy anything on IIS5 anymore, and we're definitely looking forward to v7!)
  17. Suggestion/hypothesis/possibility != statement (I didn't make any statements). I hardly have a high opinion of myself. I know enough to work in the field, but I'm not a guru by any means (you should see the great ones on newsgroups... Scott Guthrie, Rocky Lhotka, Peter Bromberg, etc - tons of them, and really bright folks! Had some great conversations with 'em, and I loved reading many of their books) If you mean me saying people without basic rudimentary programming knowledge shouldn't bash things they don't understand at all (same on other topics), then sure, you can say I'm "looking down on them". And you're putting words in my mouth. I didn't say that. It seems to me like you have: saying that people actually have that mentality, and that this is how OSes got this bloated (and now drivers too). Absolutely! Without any doubt at all. But the average end-user definitely doesn't, as they only notice GUI/theme changes, and hardly care about anything else (like all the amazing underlying changes and important improvements).
  18. Absolutely, totally wrong - all of it! That's not the purpose of SSE and other instruction sets AT ALL! Binary size is a non-issue here, and replacing a few instructions with a couple fewer here and there won't make much of a change. And since data has to be properly aligned and such, it will usually result in a larger binary (but then again, who cares? it runs faster!) The point of it is to manipulate more data at once (in parallel) to make program execution MUCH faster. You won't find "to reduce executable size" in any docs about SSE or anywhere like that, because that wasn't the purpose of it whatsoever. In fact, you'll find lots of technical papers, books and such talking about how MMX/SSE/whatever actually increases it somewhat. A quick compile of most [not necessarily all] code with the right options should prove that easily.
  19. I know I'm going against what you asked in the first place, but I can't resist adding my 2 cents worth regardless, especially after having used it for quite a while. Even if you disregard skype's bandwidth usage (it's actually a P2P app, using your bandwidth for other ppl's calls), there's still issues:
  - there's many numbers it just CAN'T call, like most 1-800 #s I've tried, so relying on this alone there was at least a dozen common numbers I couldn't call at all.
  - requiring your friends to use skype too (unless you use skypein/out)
  - costs! Skypein is €30/year (if it's even available where you live - it isn't in Canada!), and the skypeout rates aren't always great (was like 2 cents/minute last time I bought some - counts even for local calls). Mind you they have a plan (30$/year soon), but it's quickly getting to be as expensive as my VoIP provider: 120$CDN/year flat, TOTALLY unlimited, including all possible features like voicemail (€15/year with skype) and all (and with *FAR* better voice quality, and no P2P bandwidth & CPU cycles hogging).
  - it also requires your PC to be on constantly (better be a reliable system too)
If you still opt to go the skype way (I bet you'll quickly change your mind if you ever do), then you can try this skype box (35$), which has worked quite well for me (I can use both skype and my voip line through it - not that I bother much with skype anymore)
  20. I don't know how you went from "random votes by anyone who likely knows nothing or very little about the subject don't give meaningful results" to "everyone but me's an id***". You're putting words in my mouth. Perhaps you're angry about which group you fall into or something, but that doesn't change a thing. Much like jcarle said, it's a complex topic, and if we opened a "global economy debate" thread, you know most people would just be saying "they're stealing our jobs!" and such - totally disregarding the big picture and real issues (so that'd be pretty pointless too). Again, putting words in my mouth. Where exactly did I say let's just waste disk space for no reason? I haven't. What I'm saying is that optimizing executables for the absolute tiniest size, disregarding any execution speed/resource usage and all (using olde compilers that can't even optimize for P3s, let alone anything newer, or use SIMD instruction sets), is a waste of time at best. The disk space gain is pointless like you said, and lowering performance significantly in the process makes it a very bad idea. If executables need to be twice as big as they were 10 years ago, then so be it. Compilers aren't just making bigger executables for the sake of it. They're made by very intelligent folks, and if they made a compiler that makes bigger executables than the previous version, there must be a reason behind it (like speed gains). Yeah, it's not like they have tons of new features, ship with far more content, or anything like that. They just decided to use more disk space for the sake of it. Right... Ah, the popular "they're stealing our jobs!" point of view. Rich frameworks full of functionality reusable by any app != bloat (that's an extremely limited/biased view). Totally disregarding everything else - even the "easily accessible" implications/consequences (like cost/time to market) that affect end-users a great deal...
The bloat claims seem to come mainly from people who don't fully understand what the .NET framework is and what it does. Besides, it's actually working against bloat, by having the most common code/functionality implemented once (in the framework) rather than each app having to include its own version/implementation of it (which makes for smaller programs). The .NET framework 2.0 installer is like 20MB - I downloaded bigger stuff on a 28.8k modem on a P1 ten years ago, and it's a one-time deal (even available on windows update). Besides, there's no download required with Vista: it ships with it preinstalled. So people won't be able to complain about that anymore. It's no worse than having to download your video card drivers once, and I don't hear you saying video card manufacturers are making it a pain for users, and that unrelated 3rd parties should work on making smaller installers so we can stop complaining (disregarding that most video drivers nowadays are actually bigger).
  21. Alright, even if they all doubled, it would still make a very similar difference (1.xGB), hardly a big deal. That's precisely my point. Yet that accounts for a few TB of my data. Shrinking this by 1% (or even less) would make FAR more difference than magically shrinking all binaries down to 0 bytes. In other words, shrinking the binaries would be like worrying about spending 1.50$ on coffee when you're in debt by as much as the US national deficit: wrong order of priorities in the cost cutting, and too much worrying over what essentially amounts to nothing at all. Actually, there's a very noticeable increase in speed, especially if you're doing comparisons on recent hardware. Every new version has new optimizations for new stuff - especially for things like multiple-core CPUs, multithreading/auto-parallelization/SMP and such. The increases and improvements are definitely there, and the benefits are very significant. As for the size increases, I'd say you're wrong again. The basic size of a "hello world"-complexity app has grown by a few bytes (irrelevant to most), but given a more complex app compiled with the same optimizations it shouldn't make a huge difference - and again, I can't even see why anyone would worry about this nowadays. Absolutely! "Just" 10x faster is an amazing speed increase! If an app can cut down my time by 90% (on any task that consumes a fair amount of time), I'd buy a hard drive for it if I had to. Now 100x faster... I'd buy specialized hardware to run it if I had to, so 1024 times the size (1024 times ~nothing = barely enough to notice it)? Who cares? Who cares? That's what people are trying to do (using new compilers and what not). A totally irrelevant ratio to binary size? Who cares? Give me fast code (getting closer to "128x faster") and I'll be happy, and we're getting closer to that reality every day.
  22. That's 50% waste of a tiny executable's space, i.e. 50% of ~nothing. Now, if you manage to compress my several TBs of data by 50% (without zipping everything, lowering my AV stuff's quality by half and such), then you'll be able to claim 50% disk waste. Besides, your math analogy is broken: saying stuff being too big makes the disk seem half its size is one thing (even if absolutely untrue), but making the stuff smaller would only make the disk appear its "real" size, not twice it (2x smaller executables won't quadruple apparent HD size). One binary - or heck, ALL my binaries could grow by 200kb, and it wouldn't make much of a difference at all. It would add ~1GB to my current \windows directory tops - all of 30 cents worth of space wasted, big deal. Once you manage to shrink my photos, mpeg4 files, and all that by 50% (lossless), then perhaps you can speak about "doubling space"... The binaries here represent a small fraction of 1% regardless. They ARE getting FAR better indeed. Just not at making irrelevantly tiny apps, but rather FASTER apps (and supporting all kinds of new stuff) - which actually matters (among other things). Again, the compilers make FAR faster code than your asm - if that's not proof enough... Hard disk storage is uber cheap. It will take serious optimizations, recompiling, thousands of hours wasted and what not before you save a single GB of space doing this, which again is worth all of 30 cents/GB, and always getting cheaper, hence becoming even less relevant by the day. It's just not worth it for most people. But CPU speed is NOWHERE near that cheap - it sure ain't 30 cents/100MHz flat! You can't just add more (extras) as you need it - at least not without throwing your existing CPU out. Core 2 Duos are too slow for some people's likings (some apps do a lot of number crunching, it's not uncommon AT ALL) - almost everybody nowadays encodes mpeg2 or 4 and mp3's, compresses and decompresses large zip/rar files and such.
And a very significant part does all kinds of other CPU-heavy tasks (virtualization, rendering, AV editing [NLE], CAD work, using/developing apps that put significant load on a DB like warehousing or load testing, photo work making panos/HDR/images with many layers/etc, and so much more things that really aren't uncommon at all). Application binary size has NEVER been an issue for any app I've ever used, but lack of CPU power frequently has been.
  23. You can't download batch files by default off IIS6, you have to give it a MIME type first (it won't just guess). Go in IIS admin, whatever site's properties, HTTP headers, MIME types, new, extension: "bat", MIME type either "application/batch" if you want people to download it (or users to run it locally straight from IE) or "text/plain" if you want people to see the batch's content displayed (as text) and copy/paste.
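The extension-to-MIME-type mapping IIS needs here can be sketched with Python's standard mimetypes module - purely an illustration of the mapping idea, not of how IIS stores it:

```python
import mimetypes

# Register the mapping, like adding the MIME type in IIS admin.
# "text/plain" makes browsers display the batch file's content as text.
mimetypes.add_type("text/plain", ".bat")

guessed, _encoding = mimetypes.guess_type("deploy.bat")
print(guessed)  # -> text/plain
```

Without a registered type for the extension, a server has nothing to put in the Content-Type header, which is exactly why IIS6 refuses to serve the file until you add one.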
  24. Like I said, it's not any single batch/size. From pretty old stuff (as far as corporate PCs go) to the new-ish. Just not the very latest, as we stopped buying that particular brand a little while ago.
  25. I guess it comes from mixed experiences. I've never had to RMA a single seagate drive EVER. And very few from maxtor/samsung/toshiba/hitachi/etc. Then there's the obvious ones I've seen LOTS of - not really the more known bad series like the 5 platter "deathstars" - but things like the notoriously bad batches of fujitsus that resulted in a class action lawsuit (we're measuring by the full triwall of bad HDs here). And besides those bad batches, the only brand we've had significant problems with is WD. Be it by total # of bad drives, or percentage (bad/total), or anything, they lose on all metrics. It's bad enough that we ensure that the new PCs we buy don't have WD drives in 'em (and we buy them by the pallet). So yeah, I don't trust my data to any WD drive, and will take ANYTHING over it. After seeing so many bad WDs, there's no way I could possibly trust them (it's not just one or two bad series like the fujitsus or "deathstars"). It's certainly not a few ppl on a forum who have seen 2 bad maxtors in a lifetime (hence they must suck!) or such that will make me change my mind. Between anecdotal evidence, extremely low sample sizes that make these statistically meaningless, cases where it was actually the user's fault the drive died (cooling problems or such), and plain luck/bad luck (on a low scale), it's hard to attach much relevance to these. As for the best/most trusted brand, I picked seagate for lack of more comprehensive options ("almost anything"), but it's a good drive: good value, good performance and all. As for the totally untrusted one, I guess everybody can tell...