Will a database with a large set of records affect read speed? (MySQL)


Our company has signed a franchise deal to provide our POS (cash register) application to its customers, which should lead to 3-5 new sign-ups every week. We currently create a duplicate copy of the application's database for every new customer, and as you can tell, this approach will end up crushing us. We're thinking it might be best to use one database for all clinics, identified by some form of client ID on each table.

Now we've got a bit of a debate on the topic: one coworker says that having a large table of 50,000+ rows will make things slow and we'd have to optimize the entire application. The other coworker says MySQL is designed to handle large databases, and as long as you specify the client ID in each query's WHERE clause you'll only receive that subset of the data, with practically no change in speed.

Will selecting a subset of data from a large table (100,000+ rows) show a significant speed difference compared to selecting the same number of rows from a smaller table?

Also, any recommendations on the way to go with the database design would be appreciated.
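To illustrate, here is a minimal sketch of the shared-database approach we're considering; the table and column names are just placeholders:

    -- Hypothetical multi-tenant schema: every row carries a client_id,
    -- and a composite index puts client_id first so per-client lookups
    -- only touch that client's slice of the table.
    CREATE TABLE sales (
        id         INT UNSIGNED  NOT NULL AUTO_INCREMENT,
        client_id  INT UNSIGNED  NOT NULL,
        sold_at    DATETIME      NOT NULL,
        total      DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (id),
        KEY idx_client_sold_at (client_id, sold_at)
    ) ENGINE=InnoDB;

    -- Every query would filter on client_id, for example:
    SELECT id, sold_at, total
    FROM sales
    WHERE client_id = 42
      AND sold_at >= '2013-01-01'
    ORDER BY sold_at;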

Data size will affect performance, but with proper indexing, lookups in 50-100k records should not be a problem. A production database I work with (~1.25 million records) takes negligible time (less than 0.005 sec) to fetch a record by primary key.

It still depends on actual usage; you might end up having to optimize queries and add indexes.
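As a rough check (reusing the placeholder sales table sketched in the question), EXPLAIN will tell you whether a query is hitting an index or scanning the whole table:

    -- If the composite index is used, EXPLAIN should report key=idx_client_sold_at
    -- with type=ref or range, rather than a full table scan (type=ALL).
    EXPLAIN
    SELECT id, sold_at, total
    FROM sales
    WHERE client_id = 42;

    -- Adding an index later is straightforward if a slow query turns up:
    ALTER TABLE sales ADD INDEX idx_client_total (client_id, total);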

