bobbert Posted February 23, 2006

Okay... I know that Decimal is accurate to about 28-29 significant digits, and that Single and Double can store numbers with a much larger range. What I want to know is: how much accuracy is lost by using Single and Double? I think that, because they use binary floating point, Single and Double become quite inaccurate *scratch* but I'm not too sure. If anyone could help, it would be a huge help. Thanks!
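For anyone finding this thread later: roughly speaking, Single keeps about 7 significant decimal digits, Double about 15-16, and Decimal 28-29. Below is a minimal sketch (VB.NET assumed from the type names in the question; the specific literals are just illustrations) showing where each type starts losing digits. Exact display formatting can vary by runtime version.

Imports System

Module PrecisionDemo
    Sub Main()
        ' Single: ~7 significant decimal digits (24-bit mantissa).
        ' 123456789 cannot be stored exactly; the nearest Single is 123456792.
        Dim s As Single = 123456789.0F
        Console.WriteLine(s.ToString("R"))   ' round-trip format shows the rounding

        ' Double: ~15-16 significant decimal digits (53-bit mantissa).
        ' 0.1 and 0.2 have no exact binary representation, so the sum is slightly off.
        Dim d As Double = 0.1R + 0.2R
        Console.WriteLine(d.ToString("R"))   ' prints 0.30000000000000004

        ' Decimal: 28-29 significant digits, stored in base 10,
        ' so decimal fractions like 0.1 are exact.
        Dim m As Decimal = 0.1D + 0.2D
        Console.WriteLine(m)                 ' prints exactly 0.3
    End Sub
End Module

The trade-off in a sentence: Decimal trades range and speed for exact base-10 precision, while Single and Double trade exactness for a far larger range, which is why Decimal is the usual choice for money and Double for scientific work.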