Everything posted by CoffeeFiend

  1. I did think of that possibility. But that forces you to look for files in either source that don't exist in the other (a bit of extra work yet again -- we just keep adding "an extra 5 lines here or there" with every special case to handle). I thought that if he wanted to leave both original folders untouched, it would be simple enough to copy one to the "destination" directory ahead of time, then use the tool to merge the other into that.
     As far as logging goes, the sky's the limit. You could even output a CSV file with a list of each file present (or not) in both source folders, along with their hex/text versions, PE timestamp, MD5/SHA1 hashes, etc. -- and of course which action was taken. Although that's perhaps closer to reporting. One could go a bit overboard with it and use log4net, with its different "outputs" (like straight to screen or text files), its different levels, an XML config file, etc. Or write separate file lists to disk of which files were newer, unique, identical, etc. I was just planning on doing basic console output for now (and just for the "special" cases -- no point in flooding it with a line for each file IMO). But I guess what really matters is what tomasz86 wants/needs.
     Indeed. There are always lots of little things like that to handle (like guard clauses in case the source/destination folders don't exist). I was mainly worried that the formats might have changed between different versions of the same file, but that's highly unlikely. Either way, the Version class could still compare, say, 4.10.1998 to 7.61.3456.7654 with no problem if that ever happened.
     Yes. I've seen this when doing a quick check too. That's easily fixed.
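     A rough sketch of what one such CSV "report" line per file could look like in C#; the column choices (path, text version, MD5/SHA1 hashes, action taken) and the file names are just an example, not a spec:

     using System;
     using System.Diagnostics;
     using System.IO;
     using System.Security.Cryptography;

     static class Report
     {
         // Builds one CSV line for a file: path, text version, MD5 and SHA1 hashes, plus the action taken.
         static string CsvLine(string path, string action)
         {
             string version = FileVersionInfo.GetVersionInfo(path).FileVersion ?? "";
             string md5, sha1;
             using (var stream = File.OpenRead(path))
                 md5 = BitConverter.ToString(MD5.Create().ComputeHash(stream)).Replace("-", "");
             using (var stream = File.OpenRead(path))
                 sha1 = BitConverter.ToString(SHA1.Create().ComputeHash(stream)).Replace("-", "");
             return string.Join(",", new[] { path, version, md5, sha1, action });
         }

         // Example usage (file names made up):
         // File.AppendAllText("merge-report.csv", CsvLine(@"C:\src\kernel32.dll", "copied") + Environment.NewLine);
     }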
  2. Great list, dencorso. I was thinking of it as an xcopy that only copies (overwrites the destination) with the newer files, as in:
     -if the hex version of the src file is newer, then copy the src file to the dst folder;
     -else, if the text version of the src file is newer, then copy the src file to the dst folder;
     -else, if both versions are the same and the PE timestamp of the src file is newer, then copy the src file to the dst folder;
     -else do nothing (there's no indication that it's newer).
     Please point out potential problems with this method. I will adapt it accordingly.
     The text version shouldn't really be a problem, unless you've got cases where the same file (from 2 different updates) uses very different formats. Parsing it seems fairly simple too:
     -truncate the text at the first space encountered (if any) to remove the "(operating system version here)" part tacked on at the end
     -convert the said text to a Version object, which will do the heavy lifting (parsing, comparing) for us
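     A minimal C# sketch of that decision order, assuming FileVersionInfo supplies both the "hex" version (the numeric parts) and the text version, and using the file's last-write time as a stand-in for the real PE timestamp (reading the actual PE-header timestamp would mean parsing the COFF header, which is left out here):

     using System;
     using System.Diagnostics;
     using System.IO;

     static class VersionCompare
     {
         // Parse the textual version, truncating at the first space, e.g. "1.2.3.4 (operating system version here)".
         static Version ParseTextVersion(string fileVersion)
         {
             if (string.IsNullOrEmpty(fileVersion)) return null;
             int space = fileVersion.IndexOf(' ');
             string clean = space >= 0 ? fileVersion.Substring(0, space) : fileVersion;
             // Version has no TryParse before .NET 4, so fall back to try/catch.
             try { return new Version(clean); } catch { return null; }
         }

         // Decide whether src looks newer than dst, following the hex -> text -> timestamp order above.
         static bool IsNewer(string src, string dst)
         {
             FileVersionInfo s = FileVersionInfo.GetVersionInfo(src);
             FileVersionInfo d = FileVersionInfo.GetVersionInfo(dst);

             var sHex = new Version(s.FileMajorPart, s.FileMinorPart, s.FileBuildPart, s.FilePrivatePart);
             var dHex = new Version(d.FileMajorPart, d.FileMinorPart, d.FileBuildPart, d.FilePrivatePart);
             if (sHex != dHex) return sHex > dHex;

             Version sText = ParseTextVersion(s.FileVersion), dText = ParseTextVersion(d.FileVersion);
             if (sText != null && dText != null && sText != dText) return sText > dText;

             // Same version both ways: fall back to the timestamp (last-write time as a stand-in).
             return File.GetLastWriteTimeUtc(src) > File.GetLastWriteTimeUtc(dst);
         }
     }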
  3. You're just copying the one file there (where you have to manually pass in both full paths to a single file); that's still quite a long shot from being an "automated drop-in replacement for xcopy with version checking" (where you pass in 2 paths and it does everything automatically). If you can do the whole thing in batch then more power to you. There's just no way I'm going to waste any time trying to write that (finding all files incl. subdirs, doing all the version checking including handling all the special cases like dencorso mentioned) using batch files (I avoid using batch as much as possible for my own sanity). Feel free to share a batch file that automates the complete job; tomasz86 would surely appreciate it. I just don't have that kind of time to waste.
  4. That's just yet another tool that reports the version, hardly an automatic "xcopy replacement with built-in version comparison".
  5. I can take care of that. You'd just have to confirm the desired behavior. Like:
     -if the file in "source" doesn't exist in "destination", then just copy it anyway?
     -if the file in "source" doesn't have a "version" embedded in it (like, say, a picture), then we copy it to "destination" only if the timestamp is newer?
     -any other particular "rules" you'd want it to obey?
     It's just those little things we have to figure out beforehand (requirements). Then it would only take a few minutes to write such a program.
  6. It depends on which version of the .NET framework one uses. The newer versions have some really nice and useful additions like LINQ, but those frameworks don't run on Win2k. It's still fairly simple to write using an older version of the .NET framework (v2); it's just more work to do so. There are a few things one would have to think of ahead of time though, like what to do when one or both files don't have an embedded "version" (just blindly overwrite? overwrite if the timestamp is newer? just warn and copy nothing? ...)
     You can do just about anything. You could log to any file format (like lists of newer/older/unchanged/overwritten files and such), you could export a CSV file with the differences and so on (you could include file hashes too). In fact, it's more work thinking about it (requirements, things that might occur and how to handle them, etc.) than actually writing it.
     Seriously, it's not that much work. Getting the list of all files (including all subfolders) using the Directory.GetFiles method takes a single line of code, and getting a file version using FileVersionInfo.GetVersionInfo is another simple one-liner. Then you just have to iterate through the list of files using a plain "foreach" statement and compare the versions (copying files as necessary, using the File.Copy method -- another trivial one-liner).
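     Roughly what that loop could look like -- a bare-bones sketch only; the command-line arguments and the "copy when the numeric version is newer" rule are assumptions here, not tomasz86's final requirements:

     using System;
     using System.Diagnostics;
     using System.IO;

     class VerCopy
     {
         static void Main(string[] args)
         {
             if (args.Length != 2) { Console.WriteLine("usage: VerCopy <source> <destination>"); return; }
             string src = args[0], dst = args[1];

             // One line to get every file, including subfolders.
             foreach (string srcFile in Directory.GetFiles(src, "*", SearchOption.AllDirectories))
             {
                 string dstFile = Path.Combine(dst, srcFile.Substring(src.Length).TrimStart('\\'));

                 if (!File.Exists(dstFile))
                 {
                     Directory.CreateDirectory(Path.GetDirectoryName(dstFile));
                     File.Copy(srcFile, dstFile);           // nothing there yet: just copy it
                     continue;
                 }

                 // Another one-liner per file to get the embedded version (all zeros if there is none).
                 FileVersionInfo s = FileVersionInfo.GetVersionInfo(srcFile);
                 FileVersionInfo d = FileVersionInfo.GetVersionInfo(dstFile);
                 var sv = new Version(s.FileMajorPart, s.FileMinorPart, s.FileBuildPart, s.FilePrivatePart);
                 var dv = new Version(d.FileMajorPart, d.FileMinorPart, d.FileBuildPart, d.FilePrivatePart);

                 if (sv > dv)
                 {
                     File.Copy(srcFile, dstFile, true);     // overwrite: the source is newer
                     Console.WriteLine("Updated " + dstFile + " (" + dv + " -> " + sv + ")");
                 }
             }
         }
     }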
  7. If you intend to use this, you're in for a LOT of fun parsing the tables manually (well, writing code to do so)! Just a rough dump of the first few bytes of it on this board:
     00 always 0
     0204 SMBIOS version 2.4
     24 wNumSMBStruc
     BC04 Structure Table Length 0x04BC
     0000 end table
     00 table type 00
     18 its length (24 dec)
     0000 its handle
     01 BIOS Vendor, string1
     02 BIOS Version, string2
     00E0 start address segment
     03 Release Date, string3
     0F rom size (1024KB)
     809EC97F00000000 bios characteristics
     33 characteristics ext1
     05 characteristics ext2
     FFFF bios release (sometimes filled)
     EC release (...)
     417761726420536F66747761726520496E7465726E6174696F6E616C2C20496E632E00 "Award Software International, Inc."
     46313300 "F13" (the BIOS version is F13)
     30362F31392F3230303900 "06/19/2009" (the BIOS date)
     00 end of the block
     Next up, there's table 01 with manufacturer, product name, version, the UUID, wakeup type... then table 02 with much of the same (mfg, product, version, a dummy serial number), then table 03 with a few new interesting things like the asset tag (not filled here), then table 04 with the processor info (socket type, mfg, family, processor ID, CPUID strings, cache info, serial no, etc.)... and it just goes on.
     If you intend to use this directly, you'll have to devote countless hours writing code to parse all the different tables, some of which are filled in strange ways (it changes quite a bit from one board to another), also depending on which version of SMBIOS it uses (tables change! read the new PDFs for those changes in your parser code! Lots of fun handling the unexpected!) Never mind that the info you're looking for mostly isn't there. Most of what's left is RAM info, port info and so on.
     As for the CMOS RAM (where the BIOS settings are saved), you write the index of the byte you want to read or write to port 0x70 (mov al, somevalue then OUT 70h, al or similar), then you IN or OUT on port 0x71 to read or write the said byte (for the first bank; the other one can usually be relocated using a special register depending on the chipset used -- I've seen 0x74 for index and 0x76 for data before). Not that Windows will let you get away with such low-level hardware access these days, unless you fancy writing a kernel mode driver. Good luck!
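     The structure-walking part, at least, is mechanical: every SMBIOS structure starts with a 4-byte header (type, length, handle), the formatted area is "length" bytes, and it's followed by its string set, terminated by a double null. A rough C# sketch, assuming the raw table is fetched through the MSSmBios_RawSMBiosTables WMI class (root\WMI) rather than read out of the BIOS area directly -- decoding the individual fields of each table is the part that eats the countless hours:

     using System;
     using System.Management;   // add a reference to System.Management.dll

     class SmbiosWalk
     {
         static void Main()
         {
             var searcher = new ManagementObjectSearcher(
                 @"root\WMI", "SELECT SMBiosData FROM MSSmBios_RawSMBiosTables");
             foreach (ManagementObject mo in searcher.Get())
             {
                 byte[] data = (byte[])mo["SMBiosData"];
                 int i = 0;
                 while (i + 4 <= data.Length)
                 {
                     byte type = data[i];                // table type (0 = BIOS info, 1 = system, 4 = CPU...)
                     byte len = data[i + 1];             // length of the formatted area only
                     ushort handle = (ushort)(data[i + 2] | data[i + 3] << 8);
                     Console.WriteLine("Table {0,3}  length {1,3}  handle 0x{2:X4}", type, len, handle);

                     // Skip the formatted area, then the string set (terminated by a double null).
                     int p = i + len;
                     while (p + 1 < data.Length && !(data[p] == 0 && data[p + 1] == 0)) p++;
                     i = p + 2;

                     if (type == 127) break;             // type 127 = end-of-table marker
                 }
             }
         }
     }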
  8. Then you're doing it wrong. Even going by that one unique benchmark, which by itself only means so much, the E8600 loses to the i7 2600K by almost 30%. If you check some other benches, the E8600 loses quite badly in many real-world scenarios, from fps in games (twice the fps in L4D, for example) to Photoshop/Acrobat processing times and encryption tasks, which are drastically faster. Video encoding, video editing and rendering are also way faster. Even simple everyday tasks like creating a 7zip archive are 3x as fast (41s vs 122s)!
     Then again, for developing & compiling .NET apps with VS2010 a plain old Core 2 Duo with enough RAM is plenty fast (still happily using C2Ds for that at work & home myself).
  9. Yet, nothing comes even close to it for movies. VLC is handy for a few other things (streaming/transcoding/capturing stuff, playing obscure formats & broken files, etc) but as a general purpose media player it's pretty awful IMO (sucks at playing H.264 files big time, hw acceleration isn't quite there yet, no EVR renderer support, really poor interface which usually happens with multi-platform programs, poor keyboard shortcuts, even mouse control sucks, tons of bugs like rebuilding the font cache for what seems like eternity all too often, etc). Mostly everything just works in MPC-HC, and ditto for ffdshow (knowing how to configure them sure helps a lot though). For audio files VLC's still horrible but I wouldn't use MPC HC for that either.
  10. Not that I'm very fond of MS' testing tools (I'd much rather use NUnit/MbUnit/xUnit or fluent versions thereof), but seemingly you have some programmers on hand. They should be able to handle this kind of work. It's pretty simple to transform XML into a more human-readable format using a simple XML style sheet. If they can't handle that, then there are some pre-made ones out there. And if that's still not easy enough, there are some open-source automatic converters from .trx to .htm. There's no point in reinventing the wheel poorly by mangling the XML.
     Then again, other testing frameworks have a lot to offer over MSTest. I personally use NUnit (along with ReSharper's test runner) most of the time but MbUnit/xUnit are also very nice. There are plenty of ways to check the results here too.
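     For the style-sheet route, the C# side is only a couple of lines; the .trx and .xslt file names below are placeholders, not files from this thread:

     using System.Xml.Xsl;

     class TrxToHtml
     {
         static void Main()
         {
             var xslt = new XslCompiledTransform();
             xslt.Load("trx-to-html.xslt");                  // your style sheet (or a pre-made one)
             xslt.Transform("results.trx", "results.htm");   // .trx in, human-readable HTML out
         }
     }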
  11. It would be pretty easy to make a xcopy-like program that checks version using C# (only copies files with newer versions, or makes a list of them or whatever). Not sure if that's any help as you're seemingly using win2k...
  12. You might be able to parse the output of some command line tool, but you're not going to detect this using a simple script (from scratch). You'd need to dig into the WDK (Windows Driver Kit) and have a pretty solid understanding of VC++ (and have Visual Studio handy), so you can use the DeviceIoControl API to send an IOCTL_ATA_PASS_THROUGH request with the IDENTIFY DEVICE command (filling an ATA_PASS_THROUGH_EX structure), from which you can extract word 217 (nominal media rotation rate). It should be set to 0x0001 for non-rotating media, including SSDs (but may be misreported by some drives).
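     The same IOCTL can also be issued from user mode via P/Invoke; a rough C# sketch below, with the structure layout and constant values transcribed by hand (treat them as assumptions to double-check against ntddscsi.h). It needs administrator rights and a physical-drive path like \\.\PhysicalDrive0:

     using System;
     using System.Runtime.InteropServices;
     using Microsoft.Win32.SafeHandles;

     class SsdCheck
     {
         const uint IOCTL_ATA_PASS_THROUGH = 0x0004D02C;    // CTL_CODE(IOCTL_SCSI_BASE, 0x040B, buffered, R/W)
         const ushort ATA_FLAGS_DATA_IN = 0x02;
         const byte ATA_IDENTIFY_DEVICE = 0xEC;

         [StructLayout(LayoutKind.Sequential)]
         struct ATA_PASS_THROUGH_EX
         {
             public ushort Length;
             public ushort AtaFlags;
             public byte PathId, TargetId, Lun, ReservedAsUchar;
             public uint DataTransferLength;
             public uint TimeOutValue;
             public uint ReservedAsUlong;
             public IntPtr DataBufferOffset;                // ULONG_PTR: offset from the start of this struct
             [MarshalAs(UnmanagedType.ByValArray, SizeConst = 8)] public byte[] PreviousTaskFile;
             [MarshalAs(UnmanagedType.ByValArray, SizeConst = 8)] public byte[] CurrentTaskFile;
         }

         [DllImport("kernel32.dll", SetLastError = true)]
         static extern SafeFileHandle CreateFile(string name, uint access, uint share, IntPtr sec,
             uint creation, uint flags, IntPtr template);

         [DllImport("kernel32.dll", SetLastError = true)]
         static extern bool DeviceIoControl(SafeFileHandle device, uint code, IntPtr inBuf, uint inSize,
             IntPtr outBuf, uint outSize, out uint returned, IntPtr overlapped);

         static void Main()
         {
             int headerSize = Marshal.SizeOf(typeof(ATA_PASS_THROUGH_EX));
             IntPtr buffer = Marshal.AllocHGlobal(headerSize + 512);    // header + 512-byte IDENTIFY data

             var apt = new ATA_PASS_THROUGH_EX
             {
                 Length = (ushort)headerSize,
                 AtaFlags = ATA_FLAGS_DATA_IN,
                 DataTransferLength = 512,
                 TimeOutValue = 10,
                 DataBufferOffset = (IntPtr)headerSize,                 // data area follows the header
                 PreviousTaskFile = new byte[8],
                 CurrentTaskFile = new byte[8]
             };
             apt.CurrentTaskFile[6] = ATA_IDENTIFY_DEVICE;              // task file byte 6 = command register
             Marshal.StructureToPtr(apt, buffer, false);

             using (SafeFileHandle drive = CreateFile(@"\\.\PhysicalDrive0", 0xC0000000 /* GENERIC_READ|WRITE */,
                 0x3 /* share read|write */, IntPtr.Zero, 3 /* OPEN_EXISTING */, 0, IntPtr.Zero))
             {
                 uint returned;
                 if (DeviceIoControl(drive, IOCTL_ATA_PASS_THROUGH, buffer, (uint)(headerSize + 512),
                     buffer, (uint)(headerSize + 512), out returned, IntPtr.Zero))
                 {
                     // IDENTIFY word 217 = nominal media rotation rate; 1 means non-rotating media (SSD).
                     ushort rate = (ushort)Marshal.ReadInt16(buffer, headerSize + 217 * 2);
                     Console.WriteLine(rate == 1 ? "Non-rotating media (SSD)" : "Rotation rate: " + rate);
                 }
                 else Console.WriteLine("DeviceIoControl failed: " + Marshal.GetLastWin32Error());
             }
             Marshal.FreeHGlobal(buffer);
         }
     }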
  13. Tons of people are moving from ASP.NET to ASP.NET MVC (similar in name but quite a different beast). C# (or VB) is what ASP.NET or ASP.NET MVC uses for server-side code; it's not something you'd use instead of ASP.NET (or ASP.NET MVC) but along with it. Most sites that use Java don't need it installed on the client PCs; they only use it server-side (JSP or servlets) to generate the HTML (not those dreaded applets). Not that I would use that personally.
     @vindhyar: No one can tell you what to use ("XYZ is best so just use that"). It comes down to a lot of factors:
     -what languages you already know or are familiar with (I wouldn't personally pick Java, but if you already know it...)
     -what the server can handle (as nice as ASP.NET MVC is, if it has to run on a cheapo LAMP hosting package...)
     and some more factors... I would personally make sure whatever you pick:
     -makes it easy to validate form data both client- and server-side (if they can share some of the same code, great)
     -has a decent object relational mapper (or at least a sane way of doing SQL queries, i.e. using parameters -- this is not optional)
     -if possible, has scaffolding or code generation from templates to create most of the form markup from the data model
     -uses a strong MVC (or MVVM) model (separation of concerns) or some other sane n-tier design/architecture
     -has a decent development environment / IDE / tools
     -works well with the database you're going to use
     -ideally has some sort of reporting technology built-in, which can be quite nice (analyzing & presenting the data)
     -ideally has some sort of user/group "login system" built-in (in a "framework"), which can be quite a time saver (why reinvent the wheel?)
     and so on. There really shouldn't be too many options left...
     Either way, if you've never done this kind of thing before, I don't think you can quite imagine how much learning there is ahead of you (HTML, CSS, SQL, the various techs/languages/tools used and so on) to do this properly (from several months to a few years). It might sound like a weekend project but I can assure you it's not... If you are in such a hurry as you make it seem in the first post (urgent!), then perhaps another tool would be best, or perhaps you may want to find a local business that will do it for you.
  14. That's one of the least bad "prosumer" NAS units out there. But that review is a bit optimistic, and doesn't say much in terms of how they tested it. If you were using this in JBOD or RAID5 (with 5 really good drives) and were writing files bigger than it can cache in its 1GB of RAM, you would see the speed dropping to 35MB/s or so, which is a far cry from TweakTown's 80-90MB/s results (they must have only been reading/writing small sizes that fit in its RAM cache). Even these guys have it benched at 35MB/s writes and 50MB/s reads (in RAID5). Like any consumer/prosumer NAS, ease of use (networking) is perhaps the main advantage, along with ease of setup. However, speed isn't...
  15. I have to echo allen2's comments: it seems like a really poor choice. You're willing to spend $100+ on a card that's not even guaranteed to work, which in the end will most likely cost you more than a proper "esata enclosure" and take up a lot of time to put together. And in the end you'll be using JBOD, which is quite unsafe -- I use RAID0 for my "OS drive" as I couldn't care less if that data went missing, but for data? NO way!
     For the price of the first port multiplier you listed, you could get this:
     -5 bays with hot-swap trays, and it uses a nice backplane instead of a mess of cables
     -it has proper cooling (airflow/ventilation)
     -there is an adequate PSU included
     -the case is already built and looks nice
     -it has the necessary HighPoint PCI-e controller card
     -it has a power switch and 7 status LEDs
     -it works for sure, and all of the hardware is supported by the vendors
     I don't think you could come up with anything significantly nicer, or anything cheaper that isn't totally ghetto. I wonder why you just said no to enclosures right from the start... Half-decent USB 3 drive enclosures, with a USB 3 hub and a USB 3 controller for your PC, wouldn't be any cheaper either, and you'd still have 5 cases and a whole bunch of wires lying around...
  16. 5400rpm isn't that big of a deal, even when performance matters (and it's often not that much of a concern to begin with). Last I checked, they actually outperformed some current 7200rpm drives. Never mind that they're already overkill speed-wise for most "mass storage" needs (how fast does a drive have to be when it's reading a 700MB avi over a 2 hour period, or a 4MB mp3 file every 3 minutes? Even an ATA33/4200rpm drive is almost overkill for lots of such tasks).
     There are several good 5400rpm drives that outperform "lesser" 7200rpm counterparts in various areas these days; it's just NOT that clear cut. For example, the seek times/access delays of the WD Green drives are better than those of the Seagate 7200.12 series, which are 7200rpm. It's pretty much faster than the Hitachi 7K1000 at basically everything if you compare both with PCMark... There's little difference in performance from a 7200rpm drive in most everyday real-life semi-intensive tasks (like loading a game). Some synthetic benchmarks might put them a few percent behind the very best 7200rpm drives, but it's really nowhere near as bad as you make it sound. Sure, they're no VelociRaptor or SSD, but they're still quite fast -- usually more than you really need. They're also very silent and low power, which is great. He can definitely afford a faster drive if he wants though...
     Edit: It's really not that bad. Just look at the pretty picture they provide us with. I'd say they're pretty clear. Intel makes nice fast CPUs but their chipsets tend to be unimpressive, alas.
  17. You mean first hand or something? Because Creative is truly notorious for that (cracking/popping). In fact, I've only ever had this problem with their stuff (I would include any card based on Creative's X-Fi chipset and using their reference drivers on that list too). Creative's numerous, ongoing issues go back at least a decade. So many major screw-ups, it's hard to remember half of it:
     -XP drivers for the SB Live that weren't a full download. They wanted you to install the Win9x VxD drivers on your OS -- which made it BSOD -- and then install the WDM driver "patch" they made you download on top of that. Or they were glad to sell you a CD. Yep, you essentially had to buy the drivers for your brand new card separately! Compatibility patches for their other apps were just that too: just patches (there's no way you could download the full thing in case you lost your CD, even if they only worked with their own hardware, like their old "remote center" for their own, proprietary remote control)
     -It took them forever to have SMP-aware drivers (that wouldn't crash on a dual CPU machine)
     -SPDIF passthrough was broken in most of their SB Live drivers for years
     -The voltage on the SPDIF output level was out of spec on the SB Live series cards (could damage your amp)
     -not adhering to the PCI 2.1 spec in older cards
     -proprietary connectors galore, like the proprietary 3 pole 1/8" minijacks (looks like an earphones jack, but with an extra "ring" on it) and the 9 pin minidins used for spdif links on the SB Live 5.1 or on the X-Fi Elite Pro, or those breakout cables with more oddball connectors such as the 26 pin D-Sub AD_LINK on the X-Fi Elite Pro. Should you need an extension or should your cat chew it, you're generally SOL. Never mind the braindead designs like using the mic in jack for the SPDIF out too, so if you used digital speakers or a home theater amp with some cards then you couldn't use a mic at all...
     -discontinuing support for the very popular SB Live series on Vista, while you could still buy the card brand new in lots of stores, or could have bought it like the year before Vista came out, and the overall driver fiasco where another guy (daniel_k) started releasing drivers for their cards instead... and in general being late with drivers for any new OS, and often not delivering what was promised (like ASIO support on the SB Live 5.1)
     ... Never mind their business practices, e.g. suing competitors like Aureal into oblivion (because they could) and then buying the scraps.
     But I'm not saying this particular Auzentech card isn't to blame for some of the issues, nor that their support is any better than Creative's. But overall, I've never seen another sound card company have so many issues with their products. They were quite nice back then (the ISA AWE32, the SB16 and so on -- although the GUS was nice too) but since then I've learned to stay away from them at all costs.
  18. I would personally pick something like this:
     -a good CPU. While i3's are pretty nice, you can afford an i5. I'd personally pick an unlocked i5 and OC it (although it's more than enough for most tasks at stock speed). An i5 2500K can hit around 4.5GHz... That's faster than anything desktop-class Intel or AMD has for sale at *any* price point (including $1000 i7's) for almost any task, and the single-threaded perf would be out of this world. $225 or so.
     -a CPU cooler that would support a decent OC. Around $50 or so (I don't keep up with those monthly reviews, you'd have to look)
     -a good P67-based motherboard from a decent OEM, preferably with solid state caps, USB 3 and SATA 6Gbps ports, 4 DIMM slots for sure, and ideally one that's good at OC'ing, like an Asus P8P67 at $160
     -a vid card with great performance at a decent price point (it's all about value). Something like a Radeon HD 6870 at around $200 (more than enough for anyone but the most extreme gamers really)
     -plenty of decent RAM. Even on an old C2D, 4GB was somewhat limiting for me. 8GB is the minimum I'd personally buy, but at the current prices 16GB isn't out of the question either (it all depends on what you plan on doing with your PC). In fact, 16GB (4x4GB of decent DDR3 @ 1.5v) should be about $160
     -a decent case (solid, good airflow, etc). Being conservative, let's pick an Antec 900, around $100
     -a good quality PSU (NOT a no-name cheapo!), definitely 80+ or better, and at that budget you can definitely get something modular too. There's no need for a crazy amount of watts. There are TONS of options out there, but let's pick a very nice Seasonic M12II 520 Bronze ($90)
     -a good SSD and a large storage drive. RAID0 setups like you mentioned only help so much for performance. Yes, copying large files (sequential access) is pretty much doubled, but seek times/latency/reading random bits of files all over the place isn't -- that's where the SSD helps. The SSD is faster for starting your OS and apps. Then for the storage drive, speed is not crucial. Something like a 128GB Crucial RealSSD C300 ($240, decently sized for an SSD and still quite fast) and a 2TB WD Green ($80) along with it.
     -any decent DVD writer, they're pretty much all around $20
     Everything on the above list should come up to $1300 or so. Some people will obviously disagree with some points depending on what they do with their PCs. An avid PC gamer would likely get less RAM (no games need that much!) and a faster video card... Plenty of others who don't game much would spend a lot less on the video card and more elsewhere. I think it's one of the fastest (and really high quality) computers you can buy at that price point, without building something with one particular task in mind. Quality parts all-around, super fast CPU/video card, tons of RAM, super fast SSD and lots of storage, great quality modular PSU and everything. It's got it all...
  19. If anything, I'd suspect it's more of a problem with Creative's X-Fi chipset, as SB Lives were pretty well known for that kind of problem back then, and there are owners of other X-Fi based cards experiencing the same problems. I just stay the f*** away from anything related in any way to Creative. From somewhat problematic hardware, to braindead designs -- especially in the interconnects department, to some of the very worst drivers I've ever had the displeasure to use (and that were very late too), to dropping support for popular OSes on not-so-old hardware and so on. It's been all-around abysmal. I wouldn't want a card based on a Creative chipset even for free. Especially if the reference drivers were written by them.
  20. I agree with -X- here. It shouldn't make anywhere near that much difference (20%), even on a netbook class CPU...
     Except, that's NOT how it works. Like the other point, I very much doubt you've benchmarked this before coming to this conclusion/recommendation. Bigger clusters do mean reading fewer clusters for the same file, but the bigger clusters are made up of more sectors, making the point entirely moot. Reading a 1MB file made up of 256 4KB clusters of 8 sectors each, or a 1MB file made up of 32 32KB clusters of 64 sectors each, still means reading 2048 sectors from disk regardless. There's zero gain there. The only real gain you may have is slightly less fragmentation (if you never defrag and don't have some kind of automatic solution) when your file is made up of fewer chunks.
     In fact, in some cases it's going to hurt performance, like when reading a small file (4KB or less, e.g. cookies, ini files and whatnot) or the last cluster of a file which is mostly empty, which will result in reading lots more sectors for nothing and as such hurt performance somewhat. Bigger cluster sizes also mean wasting more disk space. Using larger clusters may also affect other things, like reading/writing to the swap file (Windows uses 4KB memory pages, which match 4KB clusters perfectly).
     Do a proper benchmark and you won't bother with it anymore. In a lot of cases, larger clusters will actually slow down your system a bit. The main gain of larger clusters would be for specialized applications, like on a drive storing lots of large files. And even then, I don't personally see enough of a difference to bother there -- even on a video server, where all files are hundreds of MBs each.
  21. As said before, chkdsk was the common way back then.
     Then checking for space is useless: it'll always return false. MS-DOS 5 only supports FAT12 (for floppies) and FAT16 (for hard drives), which is limited to 2GB*, so you'll NEVER have 50GB free. You will never have more than 2GB free (that would be a 100% empty drive of the biggest size you can use). Checking for disk space is completely trivial (I would simply call int 21h with AH=36h and DL=3), but bypassing such a design limitation of FAT16 is just not possible. Even trying to parse the output of chkdsk, which simply returns bytes/sector (512) * sectors/cluster (64 max) * amount of free clusters (max 65536), is pointless: the max it could ever display is 2147483648 bytes, and that's still 48GB short of what you need.
     * Your current drive: 2^16 clusters @ 32KB each max (64KB clusters not supported) = 2GB max.
     Edit: unless you're mixing up MB and GB, or aren't using MS-DOS 5, or are somehow using another non-native filesystem that's tacked on by hooking interrupts or such -- in which case it's a completely different story.
     Edit2: why did you say you're using MS-DOS 5 when you're using XP?
  22. Upgrading based on an assumption might not be the best way to go. I would first find out why it's slow, and then try to fix that. It would help to know what you find to be slow in particular (general purpose apps? gaming? browsing the web? encoding video? something else?)
     As for the speed upgrade, it's not that clear cut either. Many tasks don't really benefit much from it, and even when a particular task does, the gain can vary quite a bit depending on which drive you had and which one you're replacing it with. But then again, if it's disk latency making it sluggish, then an SSD might be a better pick. People that will blindly tell you "OMG a 7200rpm drive is like, MUCH faster!!1!" tend to say that because they've heard it and are blindly repeating it, not because they have actual first-hand experience or knowledge about it...
  23. That's why you're getting those errors. The "Program Files" directory should be used for the program's files (as in, executables). You're only supposed to write there when you install or uninstall programs. That's why you get a UAC popup when you add or remove software. Data files, however, belong in the user profile. That's the way it's been since at least Win2k, but they let you get away with it more easily back then (it was less secure/locked down). Mind you, apps coded like this still caused a LOT of trouble, like for users that weren't administrators or that liked to run with lowered privileges (and use runas when necessary). Make it use data files from the user profile as it should, and all the problems (UAC, not working for non-administrators even on XP, etc.) will go away completely.
     Writing data files to "Program Files" is very much against basic "best practices". Such poorly coded apps have caused a lot of headaches for administrators for over a decade: monitor each misbehaving app with filemon (or now procmon), then set file permissions per user that needs it, on each machine they use, for each misbehaving app -- which is a lot of wasted time, assuming it's even worth the security risk in the first place (and that's when the app doesn't use hardcoded paths too, which is mostly unfixable) -- or even better: find an app that isn't poorly coded. They're the reason Microsoft had to come up with file virtualization (redirecting to VirtualStore), the Windows XP mode and so on (yet it's often "teh evil M$" who gets blamed for it!)
  24. That's hardly what I would call Vista's fault. The same thing will happen with any OS newer than XP (also Win 7, Win 2008, Win 2008 R2, Win 2011 SBS, etc.)
     Most likely, your .dat files are in a location that's not recommended, and as such writing to them (while not elevated) will fail, and your program also fails to check for an error. The simple fix is to make the program use .dat files from a place where you don't need admin rights to write to. Admins by default don't run fully elevated all the time now; that's just an unnecessary security risk. Programs that need admin rights for a good reason pop up the UAC dialog to elevate the process.
     Same thing again. In most common places you won't get this, but if you try to write to places like where the OS is, then yeah, those are protected and the process needs to be elevated. Again, the simple fix is to move the files somewhere else, like your user profile (which has had dedicated locations for such data files for a decade or so). OSes newer than XP sort of force you into storing files where you should have been storing them in the first place, or you have to elevate programs all the time (that's more of a workaround, not a solution).
     If there's absolutely nothing you can do to fix this program, it can't be replaced* and it's vital, then disabling UAC might be an option. It's a lot less secure though (it's pretty much a last-resort thing).
     * If it's just a program to store passwords, then look at KeePass. It's probably a lot better anyway and it works as intended (no UAC popups or anything). It's free too. Plus, we know that encryption is implemented properly (your passwords are safe).
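     A minimal C# illustration of the "put the data files in the user profile" fix; the "MyApp" folder and "settings.dat" names are made up for the example, not taken from the actual program:

     using System;
     using System.IO;

     class DataPathDemo
     {
         static void Main()
         {
             // Per-user and writable without elevation on 2000/XP/Vista/7:
             // typically C:\Users\<name>\AppData\Roaming\MyApp (or Documents and Settings\... on XP).
             string dataDir = Path.Combine(
                 Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "MyApp");
             Directory.CreateDirectory(dataDir);             // no-op if it already exists

             string datFile = Path.Combine(dataDir, "settings.dat");
             File.WriteAllText(datFile, "example");          // no UAC prompt, no admin rights needed
             Console.WriteLine("Wrote " + datFile);
         }
     }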
  25. ^^ Just what he said. I'm not a big fan of AutoIt, but when it comes to automating mouse clicks it's often your best/easiest option by a long shot. Your main other option is a "real" programming language, which you would use to send BM_CLICK messages (and a few others) to the target controls, often also using APIs like GetWindowText and SetWindowText along with that... Then again, sometimes the easiest thing isn't sending clicks to a GUI but doing something entirely different. Not knowing what the actual problem is, we have no way to help there.
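     A skeletal C# version of that approach; the window title and button caption below are placeholders for whatever the target app actually uses:

     using System;
     using System.Runtime.InteropServices;
     using System.Text;

     class ClickBot
     {
         const uint BM_CLICK = 0x00F5;   // "press this button" message for standard Button controls

         [DllImport("user32.dll", CharSet = CharSet.Auto)]
         static extern IntPtr FindWindow(string className, string windowTitle);

         [DllImport("user32.dll", CharSet = CharSet.Auto)]
         static extern IntPtr FindWindowEx(IntPtr parent, IntPtr after, string className, string text);

         [DllImport("user32.dll", CharSet = CharSet.Auto)]
         static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

         [DllImport("user32.dll", CharSet = CharSet.Auto)]
         static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxLength);

         static void Main()
         {
             IntPtr window = FindWindow(null, "Some Application");              // placeholder title
             IntPtr button = FindWindowEx(window, IntPtr.Zero, "Button", "OK"); // placeholder caption

             var title = new StringBuilder(256);
             GetWindowText(window, title, title.Capacity);
             Console.WriteLine("Found window: " + title);

             SendMessage(button, BM_CLICK, IntPtr.Zero, IntPtr.Zero);           // "click" the button
         }
     }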