MySQL Code Analysis

# Query Entry Point
MySQL's code base is fairly messy. To get a clear picture of MySQL's existing functionality, this article analyzes MySQL at the query level, aiming to lay out MySQL's query processing flow and code architecture. It is based on MySQL 5.7.


A skeleton of MySQL's entry-point code can be found in the official documentation: https://dev.mysql.com/doc/internals/en/guided-tour-skeleton.html
This article focuses on the query parsing and optimization parts. In my view, the official documentation's treatment of the entry code is concise and to the point, so this article adopts a similar style of code breakdown and analysis.


We start from the entry function given in the official documentation (located in sql_parse.cc):
```
void mysql_execute_command(THD *thd)
{
  switch (lex->sql_command) {
  case SQLCOM_SELECT: ...
  case SQLCOM_SHOW_ERRORS: ...
  case SQLCOM_CREATE_TABLE: ...
  case SQLCOM_UPDATE: ...
  case SQLCOM_INSERT: ...                   // !
  case SQLCOM_DELETE: ...
  case SQLCOM_DROP_TABLE: ...
  case SQLCOM_PREPARE:
  {
    mysql_sql_stmt_prepare(thd);
    break;
  }
  case SQLCOM_EXECUTE:
  {
    mysql_sql_stmt_execute(thd);
    break;
  }
  }
}
```
Note that in MySQL 5.7, mysql_execute_command is called from mysql_parse. The official documentation shows it being called from a different function (mysql_stmt_execute), but both are invoked under dispatch_command, so the official documentation is still a useful reference.


# Prepare Statement
In the entry code above we can see two places related to prepared statements: mysql_sql_stmt_prepare and mysql_sql_stmt_execute.


## mysql_sql_stmt_prepare


Let's first look at what mysql_sql_stmt_prepare does. As shown below, it does four things: name the statement, register it in the statement map, create its performance-schema instrumentation, and finally prepare (parse) it.


```
stmt->set_name(name);
thd->stmt_map.insert(thd, stmt);
stmt->m_prepared_stmt= MYSQL_CREATE_PS(stmt, stmt->id,
                                       thd->m_statement_psi,
                                       stmt->name().str, stmt->name().length,
                                       query, query_len);
stmt->prepare(query, query_len);
```
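The four steps above can be sketched as a tiny statement registry. This is a hypothetical illustration, not MySQL's types (`Statement` and `StatementMap` are invented names): a statement is registered under its user-visible name first, and parsing happens afterwards in a separate prepare step.

```cpp
#include <map>
#include <string>

// Invented sketch of the PREPARE-side bookkeeping: registration under a
// name comes first; the prepare (parse) step is deliberately separate.
struct Statement {
    std::string name;
    std::string query;
    bool prepared = false;      // set by prepare(), i.e. after parsing
};

class StatementMap {
public:
    // Mirrors stmt->set_name(name) + thd->stmt_map.insert(thd, stmt):
    // the statement is registered before its query text is parsed.
    Statement& insert(const std::string& name, const std::string& query) {
        Statement st;
        st.name = name;
        st.query = query;
        return map_[name] = st;
    }
    // Mirrors stmt->prepare(query, query_len); the parse itself is stubbed.
    bool prepare(const std::string& name) {
        auto it = map_.find(name);
        if (it == map_.end()) return false;
        it->second.prepared = true;  // the real code builds the LEX tree here
        return true;
    }
    const Statement* find(const std::string& name) const {
        auto it = map_.find(name);
        return it == map_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, Statement> map_;
};
```

Keeping registration and preparation separate matches the order of the calls in the snippet above: a later EXECUTE looks the statement up by name, which is why the name must be in the map before anything else.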
MYSQL_CREATE_PS here corresponds to the function inline_mysql_create_prepared_stmt(IDENTITY, ID, LOCKER, NAME, NAME_LENGTH, SQLTEXT, SQLTEXT_LENGTH), located in include/mysql/psi/mysql_ps.h:


```
inline_mysql_create_prepared_stmt(void *identity, uint stmt_id,
                                  PSI_statement_locker *locker,
                                  const char *stmt_name, size_t stmt_name_length,
                                  const char *sqltext, size_t sqltext_length)
{
  if (locker == NULL)
    return NULL;
  return PSI_PS_CALL(create_prepared_stmt)(identity, stmt_id, 
                                           locker,
                                           stmt_name, stmt_name_length,
                                           sqltext, sqltext_length);
}
```
As we can see, PSI_PS_CALL dispatches to create_prepared_stmt, which merely allocates the various data structures a prepared statement needs. That function is located in storage/perfschema/pfs_prepared_stmt.cc.


The call that **actually parses the query is stmt->prepare(query, query_len)**. The stmt here is the Prepared_statement it was declared as. The Prepared_statement class has a LEX member variable, lex, which holds the parse tree. Note: whenever you see a LEX structure, think parse tree; the source code itself calls it the parse tree descriptor. The most important call inside prepare is parse_sql(thd, &parser_state, NULL), which parses the SQL statement and builds the parse tree. parse_sql in turn calls MYSQLparse in sql_yacc.cc to do the actual SQL parsing.


**Note that the final results all live in THD. Any data structure that wants to access the results of query parsing should contain a THD and go through it.**


## mysql_sql_stmt_execute


```
void mysqld_stmt_execute(THD *thd, ulong stmt_id, ulong flags, uchar *params,
                         ulong params_length)
{
  Prepared_statement *stmt;
  ...
  stmt->execute_loop(&expanded_query, open_cursor, params,
                     params + params_length);
}
```
```
Only the most important piece is listed here, namely the call to execute_loop. (The snippet shown is mysqld_stmt_execute, the binary-protocol counterpart; mysql_sql_stmt_execute ends in the same execute_loop call.) The code documents execute_loop as follows:


```
/**
  Execute a prepared statement. Re-prepare it a limited number
  of times if necessary.


  Try to execute a prepared statement. If there is a metadata
  validation error, prepare a new copy of the prepared statement,
  swap the old and the new statements, and try again.
  If there is a validation error again, repeat the above, but
  perform no more than MAX_REPREPARE_ATTEMPTS.


  @note We have to try several times in a loop since we
  release metadata locks on tables after prepared statement
  prepare. Therefore, a DDL statement may sneak in between prepare
  and execute of a new statement. If this happens repeatedly
  more than MAX_REPREPARE_ATTEMPTS times, we give up.


  @return TRUE if an error, FALSE if success
  @retval  TRUE    either MAX_REPREPARE_ATTEMPTS has been reached,
                   or some general error
  @retval  FALSE   successfully executed the statement, perhaps
                   after having reprepared it a few times.
*/
Prepared_statement::execute_loop(String *expanded_query,
                                 bool open_cursor,
                                 uchar *packet,
                                 uchar *packet_end)
{
  ...
reexecute:
  error= execute(expanded_query, open_cursor) || thd->is_error();
  if (error)
  {
    error= reprepare();
    if (! error)                              /* Reprepare succeeded */
      goto reexecute;
  }
  ...
}
```
The function that does the real work in the code above is execute(expanded_query, open_cursor).
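The retry pattern the comment describes can be sketched independently of MySQL. Below is a minimal, hypothetical version: the function name and the callables are invented, and only the bounded-retry idea with MAX_REPREPARE_ATTEMPTS comes from the comment above.

```cpp
#include <functional>

// Give up after this many re-prepare attempts (the constant named in the
// source comment; the value here is arbitrary for the sketch).
static const int MAX_REPREPARE_ATTEMPTS = 3;

// Returns true on error, false on success, matching the MySQL convention
// documented above. execute() and reprepare() also return true on error.
bool execute_with_reprepare(std::function<bool()> execute,
                            std::function<bool()> reprepare)
{
    for (int attempt = 0; attempt <= MAX_REPREPARE_ATTEMPTS; ++attempt) {
        if (!execute())
            return false;                 // success
        if (attempt == MAX_REPREPARE_ATTEMPTS || reprepare())
            return true;                  // out of attempts, or reprepare failed
        // otherwise: statement was re-prepared; loop and execute again
    }
    return true;                          // not reached
}
```

The loop exists for exactly the reason the comment gives: metadata locks are released after prepare, so a DDL statement can invalidate the plan between prepare and execute, and each retry re-prepares against the new metadata.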


# Query Parsing
Next, let's see how a query gets parsed. We analyzed mysql_execute_command above; before that runs, a SQL query is first handed to mysql_parse() for syntax analysis:


The mysql_parse() Function




```
/*
  Parse a query.

  @param thd    Current thread
  @param rawbuf Beginning of the query text
  @param length Length of the query text
  @param[out] found_semicolon For multi queries, position of the character of
                              the next query in the query text.
*/
void mysql_parse(THD *thd, char *rawbuf, uint length,
                 Parser_state *parser_state)
{
  int error __attribute__((unused));
  ...
  if (query_cache_send_result_to_client(thd, rawbuf, length) <= 0)
  {
    LEX *lex= thd->lex;
    ...
    bool err= parse_sql(thd, parser_state, NULL);
    ...
    error= mysql_execute_command(thd);
    ...
  }
  else
  {
    /*
      Query cache hit. We need to write the general log here.
      Right now, we only cache SELECT results; if the cache ever
      becomes more generic, we should also cache the rewritten
      query string together with the original query string (which
      we'd still use for the matching) when we first execute the
      query, and then use the obfuscated query string for logging
      here when the query is given again.
    */
    thd->m_statement_psi= MYSQL_REFINE_STATEMENT(thd->m_statement_psi,
                              sql_statement_info[SQLCOM_SELECT].m_key);
    if (!opt_log_raw)
      general_log_write(thd, COM_QUERY, thd->query(), thd->query_length());
    parser_state->m_lip.found_semicolon= NULL;
  }
  ...
}
```


In the code above we see mysql_execute_command, which we have already analyzed, and right before it parse_sql. Note that before parse_sql there is a function, query_cache_send_result_to_client, which checks the query cache. In other words, if an identical query (parameters included!) was issued before, its result may already be cached; if so, the result is returned directly.
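To make the "identical query" condition concrete, here is a minimal sketch of a result cache keyed by exact query text. This is an invented illustration, not MySQL's implementation, which additionally keys on the current database, protocol details, and session variable state:

```cpp
#include <string>
#include <unordered_map>

// Toy query cache: results are keyed by the *exact* query text, so even a
// whitespace or case difference misses the cache.
class QueryCache {
public:
    void store(const std::string& query, const std::string& result) {
        cache_[query] = result;
    }
    // Returns true and fills *result on an exact-text hit.
    bool lookup(const std::string& query, std::string* result) const {
        auto it = cache_.find(query);
        if (it == cache_.end()) return false;
        *result = it->second;
        return true;
    }
private:
    std::unordered_map<std::string, std::string> cache_;
};
```

The exact-text matching is why the real cache can sit *before* parse_sql: on a hit, no parsing happens at all.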


Back to the parser. MySQL parses queries with Bison (sql_yacc.yy): the SQL is decomposed according to MySQL's grammar rules and stored in the LEX structure, which we will introduce in a later section. Note: do not modify sql_yacc.cc, sql_yacc.h, or lex_hash.h; these files are generated automatically.
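The generated MYSQLparse is far too large to show, but the core idea, grammar rules driving the construction of a tree, can be illustrated with a tiny hand-written recursive-descent parser. Everything below is invented for illustration; Node loosely plays the role that the LEX/Item tree plays in MySQL:

```cpp
#include <cctype>
#include <memory>
#include <string>

// A parse-tree node for expressions like "1+2+3".
struct Node {
    char op;     // '+' for interior nodes, 'n' for numeric leaves
    long value;
    std::unique_ptr<Node> left, right;
    explicit Node(char o, long v = 0) : op(o), value(v) {}
};

static std::unique_ptr<Node> parse_number(const std::string& s, size_t* pos) {
    std::unique_ptr<Node> n(new Node('n'));
    while (*pos < s.size() && std::isdigit((unsigned char)s[*pos]))
        n->value = n->value * 10 + (s[(*pos)++] - '0');
    return n;
}

// expr := number ('+' number)*  -- each matched rule adds a tree node,
// just as each reduced grammar rule in sql_yacc.yy extends the parse state.
std::unique_ptr<Node> parse_expr(const std::string& s) {
    size_t pos = 0;
    std::unique_ptr<Node> left = parse_number(s, &pos);
    while (pos < s.size() && s[pos] == '+') {
        ++pos;
        std::unique_ptr<Node> n(new Node('+'));
        n->left = std::move(left);
        n->right = parse_number(s, &pos);
        left = std::move(n);
    }
    return left;
}

long eval(const Node* n) {
    return n->op == 'n' ? n->value
                        : eval(n->left.get()) + eval(n->right.get());
}
```

In MySQL the tree is of course not evaluated directly like this; it is handed to the optimizer, which is exactly where the article goes next.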


# Preparing for Optimization
MySQL's code and official documentation do not draw a strict boundary between the parser and the optimizer, but the routing of SQL commands certainly belongs to the optimizer. After the parsing above, the query enters mysql_execute_command(). Look familiar? Right: it is the same function that routed the prepared statements we discussed above. The corresponding sql_command is stored in lex->sql_command. Having already analyzed prepared statements, let's now see how an ordinary SELECT statement is handled.


```
int
mysql_execute_command(THD *thd)
{
  switch (lex->sql_command) {
  ...
  case SQLCOM_SHOW_STATUS_PROC:
  case SQLCOM_SHOW_STATUS_FUNC:
  case SQLCOM_SHOW_DATABASES:
  case SQLCOM_SHOW_TABLES:
  ...
  case SQLCOM_SELECT:
  {
    thd->status_var.last_query_cost= 0.0;
    thd->status_var.last_query_partial_plans= 0;
    if ((res= select_precheck(thd, lex, all_tables, first_table)))
      break;
    res= execute_sqlcom_select(thd, all_tables);
    break;
  }
  ...
  }
}
```


Here we meet a precheck again, but unlike the check above, this precheck verifies user privileges. Once the privilege check passes, the SELECT can run. Let's see how, concretely:


The execute_sqlcom_select() Function


```
static bool execute_sqlcom_select(THD *thd, TABLE_LIST *all_tables)
{
  LEX *lex= thd->lex;
  select_result *result= lex->result;
  bool res;
  /* assign global limit variable if limit is not given */
  {
    SELECT_LEX *param= lex->unit.global_parameters;
    if (!param->explicit_limit)
      param->select_limit=
        new Item_int((ulonglong) thd->variables.select_limit);
  }
  if (!(res= open_and_lock_tables(thd, all_tables, 0)))
  {
    if (lex->describe)
    {
      /*
        We always use select_send for EXPLAIN, even if it's an EXPLAIN
        for SELECT ... INTO OUTFILE: a user application should be able
        to prepend EXPLAIN to any query and receive output for it,
        even if the query itself redirects the output.
      */
      if (!(result= new select_send()))
        return 1; /* purecov: inspected */
      res= explain_query_expression(thd, result);
      delete result;
    }
    else
    {
      if (!result && !(result= new select_send()))
        return 1; /* purecov: inspected */
      select_result *save_result= result;
      select_result *analyse_result= NULL;
      if (lex->proc_analyse)
      {
        if ((result= analyse_result=
               new select_analyse(result, lex->proc_analyse)) == NULL)
          return true;
      }
      res= handle_select(thd, result, 0); /* handle_query() in 5.7 */
      delete analyse_result;
      if (save_result != lex->result)
        delete save_result;
    }
  }
  return res;
}
```


select_limit corresponds to the LIMIT clause (when the query gives no explicit LIMIT, it is filled in from the sql_select_limit system variable, as the snippet above shows); lex->describe marks an EXPLAIN statement; and handle_select (handle_query in 5.7) is essentially a wrapper around mysql_select():


```
handle_select()
{
  ...
  res= mysql_select(thd,
                    select_lex->table_list.first,
                    select_lex->with_wild, select_lex->item_list,
                    select_lex->where,
                    &select_lex->order_list,
                    &select_lex->group_list,
                    select_lex->having,
                    select_lex->options | thd->variables.option_bits |
                    setup_tables_done_option,
                    result, unit, select_lex);
  ...
}
```
Next, we finally get into the optimizer. Exciting, right? :-)


# Optimizing the Query
## Prepare


```
bool mysql_select(THD *thd,
                  TABLE_LIST *tables, uint wild_num, List<Item> &fields,
                  Item *conds, SQL_I_List<ORDER> *order,
                  SQL_I_List<ORDER> *group,
                  Item *having, ulonglong select_options,
                  select_result *result, SELECT_LEX_UNIT *unit,
                  SELECT_LEX *select_lex)
{
  bool free_join= true;
  uint og_num= 0;
  ORDER *first_order= NULL;
  ORDER *first_group= NULL;
  DBUG_ENTER("mysql_select");
  if (order)
  {
    og_num= order->elements;
    first_order= order->first;
  }
  if (group)
  {
    og_num+= group->elements;
    first_group= group->first;
  }
  if (mysql_prepare_select(thd, tables, wild_num, fields,
                           conds, og_num, first_order, first_group, having,
                           select_options, result, unit,
                           select_lex, &free_join))
  {
    if (free_join)
    {
      THD_STAGE_INFO(thd, stage_end);
      (void) select_lex->cleanup();
    }
    DBUG_RETURN(true);
  }
  if (! thd->lex->is_query_tables_locked())
  {
    /*
      If tables are not locked at this point, it means that we have delayed
      this step until after the prepare stage (i.e. this moment). This
      allows us to do better partition pruning and avoid locking unused
      partitions. As a consequence, in such a case, the prepare stage can
      rely only on metadata about tables used and not data from them.
      We need to lock tables now in order to proceed with the remaining
      stages of query optimization and execution.
    */
    if (lock_tables(thd, thd->lex->query_tables, thd->lex->table_count, 0))
    {
      if (free_join)
      {
        THD_STAGE_INFO(thd, stage_end);
        (void) select_lex->cleanup();
      }
      DBUG_RETURN(true);
    }
    /*
      Only register query in cache if it tables were locked above.
      Tables must be locked before storing the query in the query cache.
      Transactional engines must have been signalled that the statement
      started, which external_lock signals.
    */
    query_cache_store_query(thd, thd->lex->query_tables);
  }
  DBUG_RETURN(mysql_execute_select(thd, select_lex, free_join));
}
```


Now let's look at the code of mysql_select, shown above. The part that actually runs the optimizer is mysql_execute_select at the end. Before that, a preparatory step, mysql_prepare_select, allocates the JOIN the optimizer will use. Note that before mysql_execute_select runs, MySQL has locked the tables via lock_tables.


In 5.7, the corresponding code is:


```
bool handle_query() {
...
select->prepare(thd);
...
select->optimize(thd);
...
select->join->exec();
}
```


## Optimize
Next up is mysql_execute_select (the optimize stage). MySQL combines practice-based (heuristic) and cost-model-based techniques to do optimization. Let's first see what the practice-based optimizations include.


## Practice based
**Constant propagation**. For example, column1=7 AND column2=column1 is optimized into column1=7 AND column2=7.
**Dead code elimination**. For example, column1 = 12 AND column2 = 13 AND column1 < column2 is optimized into column1 = 12 AND column2 = 13.
**Range queries**. For example, column1 IN (1, 2, 3) is treated as column1 = 1 OR column1 = 2 OR column1 = 3.
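The first rewrite above, constant propagation, can be sketched as a small fixed-point pass over a conjunction of equalities. The Cond representation is invented for this sketch; MySQL performs the equivalent rewrite on its Item tree:

```cpp
#include <map>
#include <string>
#include <vector>

// One equality in a conjunction: lhs is always a column; rhs is either
// another column (rhs_col non-empty) or the constant rhs_const.
struct Cond {
    std::string lhs;
    std::string rhs_col;
    long rhs_const;
};

// Rewrite column=column equalities into column=constant wherever a
// constant binding is already known, repeating until nothing changes.
void propagate_constants(std::vector<Cond>* conds) {
    std::map<std::string, long> known;  // column -> constant value
    bool changed = true;
    while (changed) {                   // iterate to a fixed point
        changed = false;
        for (auto& c : *conds) {
            if (c.rhs_col.empty()) {
                known[c.lhs] = c.rhs_const;      // column1 = 7 is now known
            } else if (known.count(c.rhs_col)) {
                c.rhs_const = known[c.rhs_col];  // column2 = column1 -> = 7
                c.rhs_col.clear();
                changed = true;
            }
        }
    }
}
```

The fixed-point loop matters because a rewrite can enable further rewrites (7 can flow from column1 to column2 and from there to column3).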


What other heuristics are there? JOIN actually contains many, many more. Most references don't go into specifics; readers who want to dig in should read the source code.
## Cost-model based


The mysql_execute_select() Function


```
/*
  Execute stage of mysql_select.

  @param thd        thread handler
  @param select_lex the only SELECT_LEX of this query
  @param free_join  if join should be freed
  @return Operation status
    @retval false success
    @retval true  an error
  @note tables must be opened and locked before calling
        mysql_execute_select.
*/
static bool
mysql_execute_select(THD *thd, SELECT_LEX *select_lex, bool free_join)
{
  bool err;
  JOIN* join= select_lex->join;
  DBUG_ENTER("mysql_execute_select");
  DBUG_ASSERT(join);
  if ((err= join->optimize()))
  {
    goto err;
  }
  if (thd->is_error())
    goto err;
  if (join->select_options & SELECT_DESCRIBE)
  {
    join->explain();
    free_join= false;
  }
  else
    join->exec();
err:
  if (free_join)
  {
    THD_STAGE_INFO(thd, stage_end);
    err|= select_lex->cleanup();
    DBUG_RETURN(err || thd->is_error());
  }
  DBUG_RETURN(join->error);
}
```


join->optimize() in the code above does exactly two things: practice-based optimization and cost-model-based optimization. As the code shows, once optimization is done we reach the exec stage. exec also applies some practice-based techniques to speed up execution; for example, ORDER BY and DISTINCT are dispatched to dedicated handler functions.


Next, let's see what JOIN::exec() does:


The JOIN::exec() Function

```
void JOIN::exec()
{
  Opt_trace_context * const trace= &thd->opt_trace;
  Opt_trace_object trace_wrapper(trace);
  Opt_trace_object trace_exec(trace, "join_execution");
  trace_exec.add_select_number(select_lex->select_number);
  Opt_trace_array trace_steps(trace, "steps");
  List<Item> *columns_list= &fields_list;
  DBUG_ENTER("JOIN::exec");
  THD_STAGE_INFO(thd, stage_sending_data);
  DBUG_PRINT("info", ("%s", thd->proc_info));
  result->send_result_set_metadata(*fields,
                      Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF);
  error= do_select(this);
  /* Accumulate the counts from all join iterations of all join parts. */
  thd->inc_examined_row_count(examined_rows);
  DBUG_PRINT("counts", ("thd->examined_row_count: %lu",
                        (ulong) thd->get_examined_row_count()));
  DBUG_VOID_RETURN;
}
```


Here, do_select is what reads the real data from the tables, as shown below:


```
static int
do_select(JOIN *join)
{
  int rc= 0;
  enum_nested_loop_state error= NESTED_LOOP_OK;
  DBUG_ENTER("do_select");
  ...
  else
  {
    JOIN_TAB *join_tab= join->join_tab + join->const_tables;
    DBUG_ASSERT(join->tables);
    error= join->first_select(join,join_tab,0);
    if (error >= NESTED_LOOP_OK)
      error= join->first_select(join,join_tab,1);
  }
  join->thd->limit_found_rows= join->send_records;
  /* Use info provided by filesort. */
  if (join->order)
  {
    // Save # of found records prior to cleanup
    JOIN_TAB *sort_tab;
    JOIN_TAB *join_tab= join->join_tab;
    uint const_tables= join->const_tables;
    // Take record count from first non constant table or from last tmp table
    if (join->tmp_tables > 0)
      sort_tab= join_tab + join->tables + join->tmp_tables - 1;
    else
    {
      DBUG_ASSERT(join->tables > const_tables);
      sort_tab= join_tab + const_tables;
    }
    if (sort_tab->filesort &&
        sort_tab->filesort->sortorder)
    {
      join->thd->limit_found_rows= sort_tab->records;
    }
  }
  {
    /*
      The following will unlock all cursors if the command wasn't an
      update command
    */
    join->join_free();          // Unlock all cursors
  }
  ...
}
```

first_select calls sub_select to do the actual reading of data, as follows:

The sub_select() Function

```
{
  DBUG_ENTER("sub_select");
  join_tab->table->null_row=0;
  if (end_of_records)
  {
    enum_nested_loop_state nls=
      (*join_tab->next_select)(join,join_tab+1,end_of_records);
    DBUG_RETURN(nls);
  }
  READ_RECORD *info= &join_tab->read_record;
  ...
  join->thd->get_stmt_da()->reset_current_row_for_warning();
  enum_nested_loop_state rc= NESTED_LOOP_OK;
  bool in_first_read= true;
  while (rc == NESTED_LOOP_OK && join->return_tab >= join_tab)
  {
    int error;
    if (in_first_read)
    {
      in_first_read= false;
      error= (*join_tab->read_first_record)(join_tab);
    }
    else
      error= info->read_record(info);
    DBUG_EXECUTE_IF("bug13822652_1", join->thd->killed= THD::KILL_QUERY;);
    ...
```
**The main job of this function is to initialize the tables and then read rows sequentially, one by one.** The READ_RECORD structure is where the rows are placed. At this point we have walked through the whole flow of query parsing, optimization, and execution. As noted above, the LEX structure is what stores the parsed plan.


# Optimize Summary
## JOIN::prepare


* Initialize the JOIN structure and link it to st_select_lex.
* Run fix_fields() on all items (after fix_fields(), we know everything about each item).
* Move HAVING into WHERE if possible.


## JOIN::optimize


* Run the optimization.
* The first temporary table may be created here.


## JOIN::exec


Run the select (the second temporary table may be created here).


## JOIN::cleanup


* Free all temporary tables.
* Free all other temporary state.


## JOIN::reinit


Prepare all structures for execution of SELECT (with JOIN::exec).
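The lifecycle above can be sketched as a small state machine that enforces the prepare -> optimize -> exec ordering, with reinit resetting an executed statement so it can run again. All names here are illustrative, not MySQL's:

```cpp
enum class Phase { Created, Prepared, Optimized, Executed };

class JoinLifecycle {
public:
    bool prepare()  { return advance(Phase::Created,   Phase::Prepared);  }
    bool optimize() { return advance(Phase::Prepared,  Phase::Optimized); }
    bool exec()     { return advance(Phase::Optimized, Phase::Executed);  }
    // Like JOIN::reinit: ready the structures for another execution;
    // the plan is kept, only the execution restarts.
    bool reinit() {
        if (phase_ != Phase::Executed) return false;
        phase_ = Phase::Optimized;
        return true;
    }
    Phase phase() const { return phase_; }
private:
    bool advance(Phase from, Phase to) {
        if (phase_ != from) return false;
        phase_ = to;
        return true;
    }
    Phase phase_ = Phase::Created;
};
```

This is why prepared statements pay the prepare/optimize cost once and then cycle between reinit and exec on every EXECUTE.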


# Other Data Structures
Count-related: count_field_types.
JOIN::make_join_plan() initializes the QEP, and specifically the JOIN_TABs. Inside JOIN::make_join_plan() there is a call to get_best_combination():


```
class JOIN : public Sql_alloc
{
  /**
    Optimal query execution plan. Initialized with a tentative plan in
    JOIN::make_join_plan() and later replaced with the optimal plan in
    get_best_combination().
  */
  JOIN_TAB *join_tab;
  /// Array of QEP_TABs
  QEP_TAB *qep_tab;


  /**
    Array of plan operators representing the current (partial) best
    plan. The array is allocated in JOIN::make_join_plan() and is valid only
    inside this function. Initially (*best_ref[i]) == join_tab[i].
    The optimizer reorders best_ref.
  */
  JOIN_TAB **best_ref;
  JOIN_TAB **map2table;    ///< mapping between table indexes and JOIN_TABs
  
    /**
    The cost of best complete join plan found so far during optimization,
    after optimization phase - cost of picked join order (not taking into
    account the changes made by test_if_skip_sort_order()).
  */
  double   best_read;
  /**
    The estimated row count of the plan with best read time (see above).
  */
  ha_rows  best_rowcount;
}
```


Query optimization plan node.
```
class JOIN_TAB
{
  /*
    Number of records that will be scanned (yes scanned, not returned) by the
    best 'independent' access method, i.e. table scan or QUICK_*_SELECT)
  */
  ha_rows       found_records;
  /*
    Cost of accessing the table using "ALL" or range/index_merge access
    method (but not 'index' for some reason), i.e. this matches method which
    E(#records) is in found_records.
  */
  ha_rows       read_time;
 }
```


For a query without a WHERE clause, in sql_executor.cc:


```
evaluate_join_record(JOIN *join, QEP_TAB *const qep_tab)
{
    /*
      There is no condition on this join_tab or the attached pushed down
      condition is true => a match is found.
    */
    while (qep_tab->first_unmatched != NO_PLAN_IDX && found)
    {
      ...
```


The call chain for reading data is shown below. Note that READ_RECORD's buffer is what gets passed down as the uchar *buf here; once a record is read, it is filled into this buffer.


```
do_select()
-->error= join->first_select(join,qep_tab,0);
==sub_select(JOIN *join, QEP_TAB *const qep_tab,bool end_of_records)
READ_RECORD *info= &qep_tab->read_record;
-->info->read_record(info)
   -->join_read_next(READ_RECORD *info)
      -->ha_index_next(uchar * buf) 
         -->index_next(buf)
```
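The dispatch in this chain relies on READ_RECORD carrying a function pointer: the access method is chosen once at setup, and the executor then calls info->read_record(info) without knowing which reader is behind it. A minimal sketch of that pattern (all names invented; a vector stands in for the storage engine):

```cpp
#include <cstddef>
#include <vector>

struct ReadRecord {
    int (*read_record)(ReadRecord*);  // returns 0 on success, -1 at end
    const std::vector<int>* rows;     // stands in for the storage engine
    std::size_t pos;
    int buf;                          // the fetched row lands here
};

// One possible access method: plain sequential scan.
static int sequential_read(ReadRecord* info) {
    if (info->pos >= info->rows->size()) return -1;  // end of data
    info->buf = (*info->rows)[info->pos++];          // fill the buffer
    return 0;
}

// Chooses the access method once, like pick_table_access_method().
void setup_read_record(ReadRecord* info, const std::vector<int>* rows) {
    info->read_record = sequential_read;
    info->rows = rows;
    info->pos = 0;
    info->buf = 0;
}
```

Swapping in a range or index reader is just assigning a different function to read_record; the calling loop never changes, which is exactly the property sub_select exploits.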
In other words, the row that was read ends up in qep_tab->read_record. The function that actually reads it lives in ha_innodb.cc:


```
ret = row_search_mvcc(
buf, PAGE_CUR_UNSUPP, m_prebuilt, match_mode,
direction);
```
A table scan yields one of two results:


```
DB_SUCCESS
DB_END_OF_INDEX
```


Range scan:

```
int get_type() const { return QS_TYPE_RANGE; }

get_quick_keys(PARAM *param, QUICK_RANGE_SELECT *quick, KEY_PART *key,
               SEL_ARG *key_tree, uchar *min_key, uint min_key_flag,
               uchar *max_key, uint max_key_flag)
```

```
/** Searches for rows in the database using cursor.
Function is mainly used for tables that are shared across connections and
so it employs a technique that can help re-construct the rows that the
transaction is supposed to see.
It also has optimizations such as pre-caching the rows, using AHI, etc.

@param[out]    buf         buffer for the fetched row in MySQL format
@param[in]     mode        search mode PAGE_CUR_L
@param[in,out] prebuilt    prebuilt struct for the table handler;
                           this contains the info to search_tuple,
                           index; if search tuple contains 0 fields then
                           we position the cursor at the start or the end
                           of the index, depending on 'mode'
@param[in]     match_mode  0 or ROW_SEL_EXACT or ROW_SEL_EXACT_PREFIX
@param[in]     direction   0 or ROW_SEL_NEXT or ROW_SEL_PREV;
                           Note: if this is != 0, then prebuilt must have a
                           pcur with a stored position! When opening a
                           cursor, 'direction' should be 0.
@return DB_SUCCESS or error code */
dberr_t
row_search_mvcc(
    byte*           buf,
    page_cur_mode_t mode,
    row_prebuilt_t* prebuilt,
    ulint           match_mode,
    ulint           direction);
```

In row_update_for_mysql_using_upd_graph:

```
clust_index = dict_table_get_first_index(table);

if (prebuilt->pcur->btr_cur.index == clust_index) {
    btr_pcur_copy_stored_position(node->pcur, prebuilt->pcur);
} else {
    btr_pcur_copy_stored_position(node->pcur,
                                  prebuilt->clust_pcur);
}
```


```
/**
  Read row via random scan from position.

  @param[out] buf  Buffer to read the row into
  @param      pos  Position from position() call

  @return Operation status
    @retval 0     Success
    @retval != 0  Error (error code returned)
*/
int handler::ha_rnd_pos(uchar *buf, uchar *pos)
{
  int result;
  DBUG_ENTER("handler::ha_rnd_pos");
  DBUG_ASSERT(table_share->tmp_table != NO_TMP_TABLE ||
              m_lock_type != F_UNLCK);
  /* TODO: Find out how to solve ha_rnd_pos when finding duplicate update. */
  /* DBUG_ASSERT(inited == RND); */

  // Set status for the need to update generated fields
  m_update_generated_read_fields= table->has_gcol();

  MYSQL_TABLE_IO_WAIT(PSI_TABLE_FETCH_ROW, MAX_KEY, result,
    { result= rnd_pos(buf, pos); })
  if (!result && m_update_generated_read_fields)
  {
    result= update_generated_read_fields(buf, table);
    m_update_generated_read_fields= false;
  }
  DBUG_RETURN(result);
}
```


JOIN::optimize() drives the setup of the access methods:

```
JOIN::optimize()
  make_join_readinfo(JOIN *join, uint no_jbuf_after)
    QEP_TAB::pick_table_access_method(const JOIN_TAB *join_tab)
```

pick_table_access_method() assigns the function join_init_read_record to read_first_record. Later, during execution, sub_select calls read_first_record, i.e. join_init_read_record:

```
join_init_read_record(QEP_TAB *tab)
-->init_read_record
   -->static int rr_quick(READ_RECORD *info)
      -->QUICK_RANGE_SELECT::get_next()
         -->handler::multi_range_read_next
            -->handler::read_range_first
               -->ha_index_read_map
                  -->index_read
```