bobbert Posted February 23, 2006 Okay... I know that Decimal is accurate to 28-29 significant digits and that Single and Double can store even larger numbers. What I want to know is: how much accuracy is lost by using Single and Double? I think that because they use binary floating point, Single and Double become quite inaccurate, but I'm not sure. If anyone could help, it would be a huge help. Thanks.