Programmers (and ORMs) usually optimize their query strategy for as few queries and round-trips as possible - we sometimes forget that the database server may in fact be on a different physical machine, and suddenly the volume of data transferred can affect the response time more than the number of queries.
Join statements used merely to get all the data in one round-trip don’t make much sense when you really think about it - you’re often transferring and traversing loads of duplicate data, putting more load on the network and adding memory overhead.
That’s not really why we have join statements in the first place - they exist for situations where you need the database to compute something across the joined tables, e.g. an aggregate using SUM, AVG, COUNT, MIN or MAX. Join queries are not really an optimization tool, but most ORMs treat them as such.
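For example (a rough sketch - the customers/orders schema here is made up for illustration), this is the kind of query where a join genuinely pays off, because the server collapses all the order rows into one aggregate row per customer before anything hits the network:

```php
<?php
// Hypothetical schema: customers(id, name), orders(id, customer_id, total).
// The join lets the server do the aggregation; only one small row per
// customer ever crosses the wire.
$sql = "SELECT c.id, c.name,
               COUNT(o.id)  AS order_count,
               SUM(o.total) AS total_spent
        FROM customers c
        LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id, c.name";
```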
But there is in fact a way to execute multiple queries in one round-trip, at least with the mysqli-extension:
http://php.net/manual/en/mysqli.multi-query.php
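Roughly, it looks like this - a minimal sketch, assuming placeholder connection details and made-up table names:

```php
<?php
// Minimal multi_query() sketch - host, credentials and the two SELECTs
// are placeholders, not anything from a real application.
$mysqli = new mysqli('db-host', 'user', 'password', 'shop');

// Two statements, one round-trip: the whole batch goes to the server at once.
$batch = "SELECT id, name FROM customers;"
       . "SELECT id, customer_id, total FROM orders";

if ($mysqli->multi_query($batch)) {
    do {
        // store_result() buffers the current result set
        // (it returns false for statements that produce none).
        if ($result = $mysqli->store_result()) {
            while ($row = $result->fetch_assoc()) {
                // ... use $row
            }
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}
```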
This does not appear to be possible with PDO, which, I’m guessing, is why nobody uses it.
But think about it - if you have 3 customers and 50 orders and you join them, selecting all the fields, you’re actually sending each customer’s data more than 16 times on average. That data has to be serialized by the database server, transported over the network (which is worse if your database server is on a different physical machine from your application server), unserialized by the database client, and then traversed in PHP (or some of it, but all of it has to go into memory) so you can weed out the non-unique customer records.
Does that sound efficient?
Much more efficient and simpler would be a batch of two queries that selects the 3 customers and the 50 orders separately (rough sketch at the end of this post).
Do you think that’s possible?
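For reference, here’s roughly what I mean by selecting them separately and stitching the rows back together in PHP. The table and column names are hypothetical, and I’ve written it as two plain query() calls for clarity - with multi_query() above, the same pair could go out as a single batch:

```php
<?php
// Hypothetical schema again: customers(id, name, email), orders(id, customer_id, total).
$mysqli = new mysqli('db-host', 'user', 'password', 'shop');

// 3 customers - each customer row crosses the wire exactly once.
$customers = [];
foreach ($mysqli->query("SELECT id, name, email FROM customers WHERE id IN (1, 2, 3)") as $row) {
    $row['orders'] = [];
    $customers[$row['id']] = $row;
}

// 50 orders - attach each one to its customer in PHP instead of
// letting a join duplicate the customer columns 50 times over.
foreach ($mysqli->query("SELECT id, customer_id, total FROM orders WHERE customer_id IN (1, 2, 3)") as $row) {
    $customers[$row['customer_id']]['orders'][] = $row;
}
```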