Okay... I know that Decimal is accurate to about 29 significant digits and that Single and Double can store much larger numbers. What I want to know is: how much accuracy is lost when using Single and Double? I think that because they use binary floating point, Single and Double become quite inaccurate, but I'm not too sure.

If anyone could help, it would be much appreciated.

thanks
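
To make the precision difference concrete, here's a minimal sketch (assuming VB.NET, since Single, Double, and Decimal are the .NET type names) that adds 0.1 ten times in each type. Neither Single nor Double can represent 0.1 exactly in binary, so both accumulate a small rounding error, while Decimal, which works in base 10, lands exactly on 1.0:

```vb
Module PrecisionDemo
    Sub Main()
        ' Single is binary32 (~7 significant decimal digits) and Double is
        ' binary64 (~15-16 digits); neither represents 0.1 exactly.
        ' Decimal is base-10 with 28-29 digits, so 0.1D is exact.
        Dim sngSum As Single = 0.0F
        Dim dblSum As Double = 0.0
        Dim decSum As Decimal = 0D

        For i As Integer = 1 To 10
            sngSum += 0.1F
            dblSum += 0.1
            decSum += 0.1D
        Next

        ' "R" (round-trip) formatting prints every digit the value carries.
        Console.WriteLine("Single : {0:R}", sngSum)  ' e.g. 1.0000001
        Console.WriteLine("Double : {0:R}", dblSum)  ' e.g. 0.99999999999999989
        Console.WriteLine("Decimal: {0}", decSum)    ' 1.0 exactly
    End Sub
End Module
```

The exact digits printed can vary slightly by runtime, but the pattern holds: as a rule of thumb, Single is reliable to about 7 significant decimal digits and Double to about 15-16. They trade that precision for a far larger range than Decimal offers.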

