With MySQL 5.0 (JDBC driver Connector/J, version 5.0.4), cursors are defined, by default, as "client-side cursors". This means the entire result set of a SELECT is returned to the client (application) and paging is done there.
This behavior can produce "out of memory" or "not enough memory" errors because these result sets, depending on the application, can be HUGE; furthermore, each user holding a connection allocates memory for every SELECT that has not been closed.
To get "server-side cursors" working with MySQL 5.0 and Connector/J, add "useCursorFetch=true" to your JDBC URL configuration parameters and call .setFetchSize(n) with n > 0 on each statement that should use cursor-based fetching. The fetch size can also be set globally through the "defaultFetchSize" driver property. With this configuration a "server cursor" is used instead of a "client cursor": the result set is not returned to the client (application) all at once, but page by page.
The "defaultFetchSize" property defines the "page length"; with "defaultFetchSize=150", each page returns 150 rows.
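As a minimal sketch of how those URL parameters fit together (the host name, database name and fetch size of 150 below are hypothetical examples, not values from the original setup):

```java
// Sketch: building a Connector/J URL that enables server-side
// (cursor-based) fetching with a "page length" of 150 rows.
public class CursorFetchUrl {

    // Append useCursorFetch and defaultFetchSize to the base JDBC URL.
    static String buildUrl(String host, String db, int fetchSize) {
        return "jdbc:mysql://" + host + "/" + db
                + "?useCursorFetch=true"
                + "&defaultFetchSize=" + fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("dbhost", "mydb", 150));
        // Alternatively, leave defaultFetchSize out of the URL and set the
        // page size per statement instead:
        //   PreparedStatement ps = conn.prepareStatement("SELECT ...");
        //   ps.setFetchSize(150);  // any value > 0 enables cursor-based fetching
    }
}
```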
Detailed information about the driver properties can be checked here.
Advice: with MySQL 5.0 and Connector/J driver 5.0.4, some tests were run using server-side cursors and hit the problems detailed in SAC # 21338.
The "useServerPrepStmts" property was set to "false" to work around the problem, which means that server-side cursors (cursor-based fetching) were disabled.
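The workaround can be applied on the connection URL itself; a small sketch (the base URL is a hypothetical example):

```java
// Sketch: applying the SAC # 21338 workaround by appending
// useServerPrepStmts=false to a JDBC URL. Note this also disables
// cursor-based fetching, which depends on server-side prepared statements.
public class PrepStmtsWorkaround {

    // Use '?' or '&' depending on whether the URL already has parameters.
    static String withWorkaround(String baseUrl) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "useServerPrepStmts=false";
    }

    public static void main(String[] args) {
        System.out.println(withWorkaround("jdbc:mysql://dbhost/mydb"));
    }
}
```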
The next Connector/J version will disable the "useServerPrepStmts" property by default.
So far, with MySQL 5.0 and Connector/J 5.0.4 it is possible to use server-side cursors, but some problems may arise (for example SAC # 21338); in that case it is recommended to switch back to client-side cursors (the default).
No. Oracle includes the same capabilities.
When Oracle is used, the "defaultRowPrefetch=NNN" property can be set to increase the number of rows per page returned to the client (webapp). So it's the same idea; the difference is the default value.
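A minimal sketch of passing that property through JDBC connection properties (the prefetch value of 150 is an arbitrary example):

```java
import java.util.Properties;

// Sketch: setting Oracle's "defaultRowPrefetch" connection property to
// control how many rows are fetched per round trip.
public class OraclePrefetch {

    static Properties prefetchProps(int rows) {
        Properties props = new Properties();
        props.setProperty("defaultRowPrefetch", Integer.toString(rows));
        return props;
    }

    public static void main(String[] args) {
        Properties props = prefetchProps(150);
        System.out.println(props.getProperty("defaultRowPrefetch"));
        // Usage (not executed here, needs a live Oracle instance):
        //   Connection conn = DriverManager.getConnection(url, props);
    }
}
```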
Sorry, but obviously the answer is: it depends.
For performance, client-side cursors tend to be faster: the entire result set is sent across the network and cached on the client, spreading the workload across many workstations.
However, enough memory must be available to hold those caches, and that can mean a HUGE amount of memory, depending on the number of active users (users holding a connection), the number of active SELECTs (SELECTs without a close) and the number of rows returned by each one. With a server-side cursor, the server uses its own resources to manage the entire result set, and only the rows fetched through the cursor are sent to the client over the network. Server cursors are good for lightweight clients that can't afford to cache a large result set.
Reducing the number of active users (a smaller connection pool) reduces the amount of memory needed; however, it can produce "queuing" (waits) for a free connection, and the application gets slower because of this.
In conclusion: which type of cursor should be used depends on the application, the number of users, the database size, the SELECT strategy and the available memory. It's a balance between these facts and the performance requirements.