Implementing Database Read/Write Splitting with Spring AOP
Source: Internet · Editor: 程序博客网 · Date: 2024/04/29 06:10
I implemented database read/write splitting for a project a while back, and I'd like to share the approach.
Demo requirement:
Queries against the user table go to the slave (read) database, while inserts, updates and deletes go to the master (write) database.
Demo implementation:
There are several ways to do this with Spring AOP: you can code the aspect directly in AspectJ, or use the Spring + AspectJ configuration style. I use the latter here; it is simpler and clearer.
- Dependencies
First, the required Maven dependencies.
The Spring version I use:
<spring.version>4.2.2.RELEASE</spring.version>
<!-- SPRING begin -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
</dependency>
<!--
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>${spring.version}</version>
</dependency>
-->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-expression</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aop</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>${spring.version}</version>
</dependency>
<!-- SPRING end -->
Note the spring-aspects dependency in particular; don't leave it out.
- Project
I created a new Spring MVC project.
spring-mvc.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:beans="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="
            http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.2.xsd
            http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
            http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
            http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
            http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">

    <description>SpringMvc Configuration</description>

    <!-- Component scanning -->
    <context:component-scan base-package="com.smqi"/>

    <!-- Enable AspectJ auto-proxying -->
    <aop:aspectj-autoproxy/>

    <!-- Enable annotation-driven MVC -->
    <mvc:annotation-driven content-negotiation-manager="contentNegotiationManager">
        <mvc:message-converters register-defaults="false">
            <bean id="fastJsonHttpMessageConverter"
                  class="com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter">
                <property name="supportedMediaTypes">
                    <list>
                        <value>text/html;charset=UTF-8</value>
                        <value>application/json;charset=UTF-8</value>
                    </list>
                </property>
            </bean>
        </mvc:message-converters>
    </mvc:annotation-driven>

    <bean id="contentNegotiationManager"
          class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
        <property name="favorPathExtension" value="false"/>
        <property name="favorParameter" value="false"/>
        <property name="ignoreAcceptHeader" value="false"/>
        <property name="mediaTypes">
            <value>
                atom=application/atom+xml
                html=text/html
                json=application/json
                *=*/*
            </value>
        </property>
    </bean>

    <mvc:view-controller path="/" view-name="redirect:/index/home"/>
    <mvc:resources location="/content/" mapping="/content/**"/>

    <!-- Enable annotation-driven transactions -->
    <tx:annotation-driven/>

    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <beans:property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
        <beans:property name="prefix" value="/WEB-INF/view/"/>
        <beans:property name="suffix" value=".jsp"/>
    </bean>

    <bean class="org.springframework.web.servlet.handler.SimpleMappingExceptionResolver">
        <property name="exceptionMappings">
            <props>
                <prop key="org.apache.shiro.authz.UnauthorizedException">error/403</prop>
            </props>
        </property>
    </bean>
</beans>
Note that the <aop:aspectj-autoproxy/> entry is required, and it must sit in the MVC configuration that the DispatcherServlet loads at startup. Otherwise the aspect never takes effect and the data-source-switching method below is never woven in.
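As an aside, if you use Java configuration rather than XML, the equivalent switch is Spring's @EnableAspectJAutoProxy annotation. The class and package names below are illustrative only, not part of the demo project:

```java
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

// Hypothetical Java-config equivalent of <aop:aspectj-autoproxy/> plus
// <context:component-scan base-package="com.smqi"/>.
@Configuration
@ComponentScan("com.smqi")
@EnableAspectJAutoProxy
public class WebConfig {
}
```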
spring-database.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
            http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd
            http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
            http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd"
       default-lazy-init="true">

    <description>DataSource Configuration</description>

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <array>
                <value>classpath:config/properties/jdbc.properties</value>
            </array>
        </property>
    </bean>

    <!-- Master (write) data source -->
    <bean id="masterDataSource1" class="com.alibaba.druid.pool.DruidDataSource" destroy-method="close">
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
        <!-- Number of physical connections opened at initialization; initialization happens on an explicit init() call or on the first getConnection() -->
        <property name="initialSize" value="${jdbc.initialSize}"/>
        <!-- Minimum pool size -->
        <property name="minIdle" value="${jdbc.minIdle}"/>
        <!-- Maximum pool size -->
        <property name="maxActive" value="${jdbc.maxActive}"/>
        <!-- Two meanings: 1) the interval between the destroy thread's checks; 2) the idle threshold used by testWhileIdle (see that property) -->
        <property name="timeBetweenEvictionRunsMillis" value="${jdbc.timeBetweenEvictionRunsMillis}"/>
        <!-- How long a connection may stay idle before it is evicted from the pool (default 30 minutes) -->
        <property name="minEvictableIdleTimeMillis" value="${jdbc.minEvictableIdleTimeMillis}"/>
        <!-- SQL used to validate connections; must be a query. If validationQuery is null, testOnBorrow, testOnReturn and testWhileIdle have no effect -->
        <property name="validationQuery" value="${jdbc.validationQuery}"/>
        <!-- Recommended true: no performance impact, keeps connections safe. On borrow, if idle time exceeds timeBetweenEvictionRunsMillis, run validationQuery to check the connection -->
        <property name="testWhileIdle" value="${jdbc.testWhileIdle}"/>
        <!-- Run validationQuery on every borrow; reduces performance -->
        <property name="testOnBorrow" value="${jdbc.testOnBorrow}"/>
        <!-- Run validationQuery on every return; reduces performance -->
        <property name="testOnReturn" value="${jdbc.testOnReturn}"/>
        <!-- Must be > 0 to enable PSCache; a value > 0 also flips poolPreparedStatements to true. Druid does not suffer Oracle's PSCache memory problem, so a larger value such as 100 is fine -->
        <property name="maxOpenPreparedStatements" value="${jdbc.maxOpenPreparedStatements}"/>
        <!-- Force-close connections that have not been used for a long time -->
        <property name="removeAbandoned" value="${jdbc.removeAbandoned}"/>
        <!-- Idle time after which such connections are closed -->
        <property name="removeAbandonedTimeout" value="${jdbc.removeAbandonedTimeout}"/>
        <!-- Log every forced close -->
        <property name="logAbandoned" value="${jdbc.logAbandoned}"/>
        <!-- Comma-separated filter aliases. Common filters: stat (monitoring), log4j (logging), wall (SQL-injection defense) -->
        <!--<property name="filters" value="${jdbc.filtes}"/>-->
    </bean>

    <!-- Slave (read) data source -->
    <bean id="slaveDataSource1" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close">
        <property name="url" value="${slave.jdbc.url}"/>
        <property name="username" value="${slave.jdbc.username}"/>
        <property name="password" value="${slave.jdbc.password}"/>
    </bean>

    <!-- Read/write splitting configuration: start -->
    <bean id="readWriteDataSource" class="com.smqi.common.dynamicDS.ReadWriteDataSource">
        <property name="writeDataSource" ref="masterDataSource1"/>
        <property name="readDataSourceMap">
            <map>
                <entry key="readDataSource1" value-ref="slaveDataSource1"/>
                <!--<entry key="readDataSource2" value-ref="slaveDataSource1"/>-->
            </map>
        </property>
    </bean>

    <!-- Processor that decides read vs. write for intercepted methods -->
    <bean id="readWriteDataSourceTransactionProcessor"
          class="com.smqi.common.dynamicDS.ReadWriteDataSourceProcessor"/>

    <!-- Transaction attributes by method name: read methods do not start a new transaction (or join the current one); all other methods require a transaction -->
    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="save*" propagation="REQUIRED"/>
            <tx:method name="add*" propagation="REQUIRED"/>
            <tx:method name="create*" propagation="REQUIRED"/>
            <tx:method name="insert*" propagation="REQUIRED"/>
            <tx:method name="update*" propagation="REQUIRED"/>
            <tx:method name="merge*" propagation="REQUIRED"/>
            <tx:method name="del*" propagation="REQUIRED"/>
            <tx:method name="remove*" propagation="REQUIRED"/>
            <tx:method name="put*" read-only="true"/>
            <tx:method name="query*" read-only="true"/>
            <tx:method name="use*" read-only="true"/>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="count*" read-only="true"/>
            <tx:method name="find*" read-only="true"/>
            <tx:method name="list*" read-only="true"/>
            <tx:method name="select*" read-only="true"/>
            <tx:method name="*" propagation="REQUIRED"/>
        </tx:attributes>
    </tx:advice>

    <!-- Transaction AOP configuration; transactions apply only to the service layer -->
    <aop:config expose-proxy="true">
        <!-- Scope the pointcut to a package range, or narrow it to specific data operations -->
        <!--<aop:pointcut id="txPointcut" expression="(execution(* com.smqi.modules..service.impl..*.*(..))) or (execution(* com.smqi.manage..service.impl..*.*(..)))"/>-->
        <aop:pointcut id="txPointcut" expression="execution(* com.smqi.modules..service.impl..*.*(..))"/>
        <!-- Transaction advice -->
        <aop:advisor advice-ref="txAdvice" pointcut-ref="txPointcut"/>
        <!-- Read/write data source selection via an AOP aspect -->
        <aop:aspect order="-2147483648" ref="readWriteDataSourceTransactionProcessor">
            <!-- Choose the read or write database -->
            <aop:around pointcut-ref="txPointcut" method="determineReadOrWriteDB"/>
            <!-- Operation logging; currently only for annotated methods -->
            <!--<aop:around pointcut-ref="txPointcut2" method="doAroundMethodForCtLog"/>-->
        </aop:aspect>
    </aop:config>
    <!-- Read/write splitting configuration: end -->

    <!-- Transaction manager -->
    <bean name="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="readWriteDataSource"/>
    </bean>

    <!-- Spring JDBC -->
    <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <constructor-arg ref="readWriteDataSource"/>
    </bean>
</beans>
The jdbc.properties file:
##### Master database
jdbc.url=jdbc:mysql://127.0.0.1:3306/masterds?useUnicode=true&characterEncoding=UTF-8
jdbc.username=root
jdbc.password=123456
##### Slave database
slave.jdbc.url=jdbc:mysql://127.0.0.1:3307/slaveds?useUnicode=true&characterEncoding=UTF-8
slave.jdbc.username=root
slave.jdbc.password=123456
jdbc.initialSize = 1
jdbc.minIdle = 1
jdbc.maxActive = 40
jdbc.timeBetweenEvictionRunsMillis = 60000
jdbc.minEvictableIdleTimeMillis = 300000
jdbc.validationQuery = SELECT 'x'
jdbc.testWhileIdle = true
jdbc.testOnBorrow = false
jdbc.testOnReturn = false
jdbc.maxOpenPreparedStatements = -1
jdbc.removeAbandoned = true
jdbc.removeAbandonedTimeout = 1800
jdbc.logAbandoned = true
As the configuration shows, I use two local database instances (my previous article covers how to set them up; it was written as a companion to this one). You can of course point at databases on other servers instead.
Create two databases: the master (write) and the slave (read).
The service and DAO layers are plain CRUD operations against the database:
package com.smqi.modules.demo.dao.impl;

import com.smqi.modules.demo.dao.DemoDao;
import com.smqi.modules.demo.entity.DemoUser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Repository;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

/**
 * Created by smqi on 2016/10/31.
 */
@Repository
public class DemoDaoImpl implements DemoDao {

    protected final Logger logger = LoggerFactory.getLogger(DemoDaoImpl.class);

    @Autowired
    private JdbcTemplate jdbcTemplate;

    public void addDemo() {
        jdbcTemplate.update("insert into demo_user (name, age) VALUES (\"master-db\", 23)");
    }

    public void getDemo() {
        DemoUser user = new DemoUser();
        String sql = "select * from demo_user where id = ?";
        try {
            user = jdbcTemplate.queryForObject(sql, new Object[]{1}, new RowMapper<DemoUser>() {
                public DemoUser mapRow(ResultSet resultSet, int i) throws SQLException {
                    DemoUser user = new DemoUser();
                    user.setId(resultSet.getInt(1));
                    user.setName(resultSet.getString(2));
                    user.setAge(resultSet.getInt(3));
                    return user;
                }
            });
        } catch (Exception e) {
            System.out.println("No such user in the database!");
        }
        System.out.println("Name of the user with id=1: " + user.getName());
    }

    public void queryDemo() {
        String sql = "select * from demo_user";
        List<DemoUser> userList = jdbcTemplate.query(sql, new Object[]{}, new RowMapper<DemoUser>() {
            public DemoUser mapRow(ResultSet resultSet, int i) throws SQLException {
                DemoUser user = new DemoUser();
                user.setId(resultSet.getInt(1));
                return user;
            }
        });
        logger.info("Fetched {} users", userList.size());
    }

    public void deleteDemo() {
        jdbcTemplate.update("DELETE from demo_user where id = 1");
    }

    public void updateDemo() {
        jdbcTemplate.update("UPDATE demo_user SET name = 'Luffy' where id = 1");
    }
}
Core classes
These implement thread-safe data source selection with support for transaction management; the full logic is in the code below (a download link is at the end of the article).
package com.smqi.common.dynamicDS;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.jdbc.datasource.AbstractDataSource;
import org.springframework.util.CollectionUtils;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.Map.Entry;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Dynamic read/write data source.
 * Currently implemented:
 * - read databases are used in round-robin order
 * - the write database is chosen by default
 * - one write / many reads; during a write, reads go to the write database
 *   by default, or can be forced to a read database
 *
 * @author smqi
 * @createTime 2016/10/30 14:26
 */
public class ReadWriteDataSource extends AbstractDataSource implements InitializingBean {

    private static final Logger log = LoggerFactory.getLogger(ReadWriteDataSource.class);

    private DataSource writeDataSource;
    private Map<String, DataSource> readDataSourceMap;

    private String[] readDataSourceNames;
    private DataSource[] readDataSources;
    private int readDataSourceCount;

    private AtomicInteger counter = new AtomicInteger(1);

    /**
     * Set the read data sources (name -> DataSource).
     */
    public void setReadDataSourceMap(Map<String, DataSource> readDataSourceMap) {
        this.readDataSourceMap = readDataSourceMap;
    }

    public void setWriteDataSource(DataSource writeDataSource) {
        this.writeDataSource = writeDataSource;
    }

    public void afterPropertiesSet() throws Exception {
        if (writeDataSource == null) {
            throw new IllegalArgumentException("property 'writeDataSource' is required");
        }
        if (CollectionUtils.isEmpty(readDataSourceMap)) {
            throw new IllegalArgumentException("property 'readDataSourceMap' is required");
        }
        readDataSourceCount = readDataSourceMap.size();
        readDataSources = new DataSource[readDataSourceCount];
        readDataSourceNames = new String[readDataSourceCount];
        int i = 0;
        for (Entry<String, DataSource> e : readDataSourceMap.entrySet()) {
            readDataSources[i] = e.getValue();
            readDataSourceNames[i] = e.getKey();
            i++;
        }
    }

    private DataSource determineDataSource() {
        if (ReadWriteDataSourceDecision.isChoiceWrite()) {
            log.debug("current determine write datasource");
            return writeDataSource;
        }
        if (ReadWriteDataSourceDecision.isChoiceNone()) {
            // log.debug("no choice read/write, default determine write datasource");
            return writeDataSource;
        }
        return determineReadDataSource();
    }

    private DataSource determineReadDataSource() {
        // Pick a read database in round-robin order
        // TODO: improve the selection algorithm
        int index = counter.incrementAndGet() % readDataSourceCount;
        if (index < 0) {
            index = -index;
        }
        String dataSourceName = readDataSourceNames[index];
        // log.debug("current determine read datasource : {}", dataSourceName);
        return readDataSources[index];
    }

    public Connection getConnection() throws SQLException {
        return determineDataSource().getConnection();
    }

    public Connection getConnection(String username, String password) throws SQLException {
        return determineDataSource().getConnection(username, password);
    }
}
package com.smqi.common.dynamicDS;

/**
 * Decision holder for dynamic read/write routing.
 * A DataSourceType of write/read decides which database is used;
 * the choice is bound to the current thread via a ThreadLocal.
 *
 * @author smqi
 * @createTime 2016/10/30 11:13
 */
public class ReadWriteDataSourceDecision {

    public enum DataSourceType {
        write, read;
    }

    private static final ThreadLocal<DataSourceType> holder = new ThreadLocal<DataSourceType>();

    public static void markWrite() {
        holder.set(DataSourceType.write);
    }

    public static void markRead() {
        holder.set(DataSourceType.read);
    }

    public static void reset() {
        // remove() rather than set(null), to avoid leaking the ThreadLocal entry
        holder.remove();
    }

    public static boolean isChoiceNone() {
        return null == holder.get();
    }

    public static boolean isChoiceWrite() {
        return DataSourceType.write == holder.get();
    }

    public static boolean isChoiceRead() {
        return DataSourceType.read == holder.get();
    }
}
package com.smqi.common.dynamicDS;

import org.aspectj.lang.ProceedingJoinPoint;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.NestedRuntimeException;
import org.springframework.core.PriorityOrdered;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.interceptor.NameMatchTransactionAttributeSource;
import org.springframework.transaction.interceptor.RuleBasedTransactionAttribute;
import org.springframework.transaction.interceptor.TransactionAttribute;
import org.springframework.util.PatternMatchUtils;
import org.springframework.util.ReflectionUtils;

import java.lang.reflect.Field;
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;

/**
 * Read/write data source selection processor.
 * Routes each intercepted method to the read or write database via AOP:
 *
 * 1. Match the current method against the read-only methods extracted from
 *    the transaction attribute source.
 *
 * 2. On a match, the call is a read:
 *    2.1 forceChoiceReadWhenWrite == true: always go to a read database;
 *    2.2 a write happened earlier in this thread and
 *        forceChoiceReadWhenWrite == false: read from the write database;
 *    2.3 otherwise read from a read database.
 *
 * 3. No match: use the write database.
 *
 * 4. Configuration:
 *    <aop:aspect order="-2147483648" ref="readWriteDataSourceTransactionProcessor">
 *        <aop:around pointcut-ref="txPointcut" method="determineReadOrWriteDB"/>
 *    </aop:aspect>
 *    4.1 order = Integer.MIN_VALUE, i.e. the highest precedence
 *        (see http://jinnianshilongnian.iteye.com/blog/1423489)
 *    4.2 the pointcut (txPointcut) is the same one the transaction advice uses
 *    4.3 determineReadOrWriteDB decides between the read and write databases
 *
 * @author smqi
 * @createTime 2016/10/30 09:57
 */
public class ReadWriteDataSourceProcessor implements BeanPostProcessor, PriorityOrdered {

    private static final Logger log = LoggerFactory.getLogger(ReadWriteDataSourceProcessor.class);

    private boolean forceChoiceReadWhenWrite = false;

    private Map<String, Boolean> readMethodMap = new HashMap<String, Boolean>();

    /**
     * Whether a read that follows a write in the same thread is forced to
     * the read database. Default false: after a write, reads go to the
     * write database.
     */
    public void setForceChoiceReadWhenWrite(boolean forceChoiceReadWhenWrite) {
        this.forceChoiceReadWhenWrite = forceChoiceReadWhenWrite;
    }

    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (!(bean instanceof NameMatchTransactionAttributeSource)) {
            return bean;
        }
        try {
            NameMatchTransactionAttributeSource transactionAttributeSource = (NameMatchTransactionAttributeSource) bean;
            Field nameMapField = ReflectionUtils.findField(NameMatchTransactionAttributeSource.class, "nameMap");
            nameMapField.setAccessible(true);
            @SuppressWarnings("unchecked")
            Map<String, TransactionAttribute> nameMap = (Map<String, TransactionAttribute>) nameMapField.get(transactionAttributeSource);
            for (Map.Entry<String, TransactionAttribute> entry : nameMap.entrySet()) {
                RuleBasedTransactionAttribute attr = (RuleBasedTransactionAttribute) entry.getValue();
                // Only read-only attributes are processed
                if (!attr.isReadOnly()) {
                    continue;
                }
                String methodName = entry.getKey();
                Boolean isForceChoiceRead = Boolean.FALSE;
                if (forceChoiceReadWhenWrite) {
                    // Force reads to the read database even after a write:
                    // NOT_SUPPORTED suspends any current transaction
                    attr.setPropagationBehavior(Propagation.NOT_SUPPORTED.value());
                    isForceChoiceRead = Boolean.TRUE;
                } else {
                    // Otherwise SUPPORTS, so a read can participate in a
                    // surrounding write transaction
                    attr.setPropagationBehavior(Propagation.SUPPORTS.value());
                }
                log.debug("read/write transaction process method:{} force read:{}", methodName, isForceChoiceRead);
                readMethodMap.put(methodName, isForceChoiceRead);
            }
        } catch (Exception e) {
            throw new ReadWriteDataSourceTransactionException("process read/write transaction error", e);
        }
        return bean;
    }

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    @SuppressWarnings({"serial", "unused"})
    private class ReadWriteDataSourceTransactionException extends NestedRuntimeException {
        public ReadWriteDataSourceTransactionException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    public Object determineReadOrWriteDB(ProceedingJoinPoint pjp) throws Throwable {
        if (isChoiceReadDB(pjp.getSignature().getName())) {
            ReadWriteDataSourceDecision.markRead();
        } else {
            ReadWriteDataSourceDecision.markWrite();
        }
        try {
            return pjp.proceed();
        } finally {
            ReadWriteDataSourceDecision.reset();
        }
    }

    private boolean isChoiceReadDB(String methodName) {
        String bestNameMatch = null;
        for (String mappedName : this.readMethodMap.keySet()) {
            if (isMatch(methodName, mappedName)) {
                bestNameMatch = mappedName;
                break;
            }
        }
        Boolean isForceChoiceRead = readMethodMap.get(bestNameMatch);
        // Forced to the read database
        if (Boolean.TRUE.equals(isForceChoiceRead)) {
            return true;
        }
        // If the write database was chosen earlier in this thread, stay on it
        if (ReadWriteDataSourceDecision.isChoiceWrite()) {
            return false;
        }
        // A matched read-only method: choose the read database
        if (isForceChoiceRead != null) {
            return true;
        }
        // Default: the write database
        return false;
    }

    protected boolean isMatch(String methodName, String mappedName) {
        return PatternMatchUtils.simpleMatch(mappedName, methodName);
    }

    public int getOrder() {
        return 0;
    }

    public String getIp() {
        try {
            InetAddress ia = InetAddress.getLocalHost();
            return ia.getHostAddress();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
The logic is straightforward; a quick summary of the approach:
- The configuration files define the pointcut, i.e. which packages the AOP aspect applies to.
- The data sources are injected via configuration. The master (write) database is the default; multiple slave (read) data sources can be configured, and one is picked by a simple algorithm (round-robin here).
- Methods that should hit the read database follow a naming convention, recognized in code through the read-only transaction attribute.
- A thread-safe (ThreadLocal) flag holds the current choice, write or read, and the DataSource's getConnection method is re-implemented to select the actual data source based on that flag.
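The two mechanisms above can be condensed into a small standalone sketch: a ThreadLocal flag for the per-thread read/write decision, and an AtomicInteger for round-robin selection among read sources. The class and method names here are simplified stand-ins, not the classes from the project:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Standalone illustration of the routing logic: write wins, an unset
// flag defaults to write, and reads rotate over the read sources.
public class RoutingSketch {
    enum Choice { READ, WRITE }

    private static final ThreadLocal<Choice> HOLDER = new ThreadLocal<>();
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    static void markRead()  { HOLDER.set(Choice.READ); }
    static void markWrite() { HOLDER.set(Choice.WRITE); }
    static void reset()     { HOLDER.remove(); }

    // Mirrors determineDataSource(): WRITE or no choice -> write source;
    // READ -> round-robin over the read sources (negation guards against
    // counter overflow, as in the original code).
    static String determine(String[] readNames, String writeName) {
        Choice c = HOLDER.get();
        if (c == null || c == Choice.WRITE) {
            return writeName;
        }
        int index = COUNTER.incrementAndGet() % readNames.length;
        if (index < 0) index = -index;
        return readNames[index];
    }

    public static void main(String[] args) {
        String[] reads = {"read1", "read2"};
        markWrite();
        System.out.println(determine(reads, "write")); // write
        reset();
        System.out.println(determine(reads, "write")); // write (no choice made)
        markRead();
        System.out.println(determine(reads, "write")); // read2
        System.out.println(determine(reads, "write")); // read1
        reset();
    }
}
```

Because the flag lives in a ThreadLocal, concurrent requests cannot see each other's choice, which is why the aspect can safely mark, proceed, and reset around every service call.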
- Test results
The demo_user table starts out empty in both the master and the slave.
Start the project and request:
http://localhost:8080/mvc/index/home?type=add
Since this is a write, the row shows up in the master database while the slave remains empty.
Next, request:
http://localhost:8080/mvc/index/home?type=get
The get method queries the user with id=1; result: no such user, since the slave is still empty.
Now manually insert a record into the read database,
then run the same get request again: this time the user is found.
This confirms that inserts go to the write database while queries are indeed served from the read database.
Finally, I've packaged the demo project as a war for download, so you can test it yourself.