day8 -- socketserver review

Source: 程序博客网, 2024/06/03 22:42

    socketserver exists to handle concurrency. A plain socket can only talk to one client at a time; socketserver is the standard-library answer to serving many clients at once.

    socketserver provides these server types:

    TCPServer: a stream-socket server, for the TCP protocol.

    UDPServer: a datagram-socket server, for the UDP protocol.

    1. class socketserver.TCPServer(server_address, RequestHandlerClass, bind_and_activate=True)

    2. class socketserver.UDPServer(server_address, RequestHandlerClass, bind_and_activate=True)

    3. class socketserver.UnixStreamServer(server_address, RequestHandlerClass, bind_and_activate=True)

    4. class socketserver.UnixDatagramServer(server_address, RequestHandlerClass, bind_and_activate=True)
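The UDP variant plugs into the same handler API as the TCP one, but in a UDP handler `self.request` is a `(data, socket)` pair rather than a connected socket. Here is a minimal sketch of a `UDPServer` echo service; the name `EchoUDPHandler` is my own, and port 0 lets the OS pick a free port:

```python
import socket
import socketserver
import threading

class EchoUDPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data, sock = self.request
        sock.sendto(data.upper(), self.client_address)

# Binding to port 0 asks the OS for any free port;
# the real address is then available in server_address.
server = socketserver.UDPServer(("127.0.0.1", 0), EchoUDPHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (host, port))
reply, _ = client.recvfrom(1024)
print(reply)  # b'HELLO'

client.close()
server.shutdown()
server.server_close()
```

Since there is no connection to keep open, each datagram is a complete request; `handle()` runs once per packet.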

    Creating a socketserver takes at least these steps:

    1. Define a handler class derived from BaseRequestHandler, and override the parent's handle() method;

    2. Instantiate a server class and tell it to process requests:

    (1) server.handle_request()    processes a single request, then returns;

    (2) server.serve_forever()     processes requests forever, until shutdown() is called.
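The difference between the two calls can be seen with a tiny one-shot server: handle_request() serves exactly one connection and then returns. This is only a sketch; the class name OneShotHandler is my own:

```python
import socket
import socketserver
import threading

class OneShotHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"hi")

server = socketserver.TCPServer(("127.0.0.1", 0), OneShotHandler)
host, port = server.server_address

# handle_request() blocks until one client connects, serves it, and returns.
t = threading.Thread(target=server.handle_request)
t.start()

with socket.create_connection((host, port)) as c:
    data = c.recv(1024)
t.join()
server.server_close()
print(data)  # b'hi'
```

A second client connecting after this point would get no answer, because the server already returned; serve_forever() is the loop you want for a long-lived service.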

    The basic socketserver code:

 

import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The request handler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    # Create the server, binding to localhost on port 9999
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)

    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()

 

    socketserver is used in much the same way as socket; the difference is that socketserver can serve many clients at once. A simple example:

    Server side:

import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The request handler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        while True:
            self.data = self.request.recv(1024).strip()
            if len(self.data) == 0:
                break
            print("{} wrote:".format(self.client_address[0]))
            print(self.data)
            print("地址:", self.client_address)
            # just send back the same data, but upper-cased
            self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "0.0.0.0", 9994

    # Create the server, binding to all interfaces on port 9994
    #server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)          # plain socket style: one client at a time
    server = socketserver.ThreadingTCPServer((HOST, PORT), MyTCPHandler)  # one thread per client
    #server = socketserver.ForkingTCPServer((HOST, PORT), MyTCPHandler)   # one child process per client

    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()

    The three socketserver modes:

    TCPServer: the same behavior as a bare socket -- one client at a time;

    ThreadingTCPServer: multi-threaded -- one thread per client;

    ForkingTCPServer: multi-process -- one child process per client, for concurrency.
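Under the hood, the concurrent variants are just mix-in compositions: a concurrency strategy mixed into a transport class. A sketch of building one yourself (the class name MyThreadingServer is my own):

```python
import socketserver

# The mix-in must come first, so that its process_request()
# overrides the synchronous one inherited from TCPServer.
class MyThreadingServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True  # handler threads die with the main process

# The stdlib's ThreadingTCPServer is built the same way:
print(issubclass(socketserver.ThreadingTCPServer, socketserver.ThreadingMixIn))  # True
```

ForkingTCPServer is the same composition with ForkingMixIn instead, and is only available on platforms with os.fork() (i.e., Unix).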

    Client side:

import socket

class Myclient(object):
    def __init__(self):
        self.client = socket.socket()

    def connect(self, ip, port):
        self.client.connect((ip, port))

    def interactive(self):
        while True:
            mess = input(">>:").strip()
            if len(mess) == 0:
                print("cannot send an empty message")
                continue
            self.client.send(mess.encode("utf-8"))
            data = self.client.recv(1024).decode("utf-8")
            print(data)

if __name__ == "__main__":
    client = Myclient()
    client.connect("localhost", 9994)  # must match the port the server binds
    client.interactive()

    In socketserver, all the per-connection logic is packed into the handle() method. After starting the server above you can start several clients; the exchange looks like this:

    Output on the server side:

127.0.0.1 wrote:
b'asfda'
地址: ('127.0.0.1', 49256)
127.0.0.1 wrote:
b'gagds'
地址: ('127.0.0.1', 49254)
127.0.0.1 wrote:
b'\xe6\x88\x91\xe4\xbb\xac'
地址: ('127.0.0.1', 49254)
127.0.0.1 wrote:
b'\xe9\x83\xbd\xe6\x98\xaf'
地址: ('127.0.0.1', 49254)
127.0.0.1 wrote:
b'\xe5\xa5\xbd\xe5\x93\x88'
地址: ('127.0.0.1', 49254)
127.0.0.1 wrote:
b'shibushi'
地址: ('127.0.0.1', 49252)
127.0.0.1 wrote:
b'\xe6\x98\xaf\xe5\x91\x80\xef\xbc\x8c\xe9\x83\xbd\xe6\x98\xaf\xe4\xb8\x80\xe6\xa0\xb7'
地址: ('127.0.0.1', 49252)

    As you can see, requests are served concurrently: three different client ports (49256, 49254, 49252) are connected at the same time.

    self.client_address prints as ('127.0.0.1', 49252); it is a tuple made up of the client's IP address and port number.
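Since it is a plain tuple, the address can be unpacked directly, for example:

```python
# The (ip, port) tuple as a handler would see it in self.client_address;
# the values here are just the ones from the sample output above.
client_address = ("127.0.0.1", 49252)
ip, port = client_address
print(ip, port)  # 127.0.0.1 49252
```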

    Worth committing to memory: socketserver's three interaction modes, and how to use socketserver to serve many clients concurrently.

    An excerpt from the socketserver source code:

"""Generic socket server classes.

This module tries to capture the various aspects of defining a server:

For socket-based servers:

- address family:
        - AF_INET{,6}: IP (Internet Protocol) sockets (default)
        - AF_UNIX: Unix domain sockets
        - others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
        - SOCK_STREAM (reliable stream, e.g. TCP)
        - SOCK_DGRAM (datagrams, e.g. UDP)

For request-based servers (including socket-based):

- client address verification before further looking at the request
        (This is actually a hook for any processing that needs to look
         at the request before anything else, e.g. logging)
- how to handle multiple requests:
        - synchronous (one request is handled at a time)
        - forking (each request is handled by a new process)
        - threading (each request is handled by a new thread)

There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:

        +------------+
        | BaseServer |
        +------------+
              |
              v
        +-----------+        +------------------+
        | TCPServer |------->| UnixStreamServer |
        +-----------+        +------------------+
              |
              v
        +-----------+        +--------------------+
        | UDPServer |------->| UnixDatagramServer |
        +-----------+        +--------------------+

Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.

Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes.  For
instance, a threading UDP server class is created as follows:

        class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass

The Mix-in class must come first, since it overrides a method defined
in UDPServer!

To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method.  You can then run
various versions of the service by combining one of the server classes
with your request handler class.
"""

class ThreadingMixIn:
    """Mix-in class to handle each request in a new thread."""

    # Decides how threads will act upon termination of the
    # main process
    daemon_threads = False

    def process_request_thread(self, request, client_address):
        """Same as in BaseServer but as a thread.

        In addition, exception handling is done here.
        """
        try:
            self.finish_request(request, client_address)
            self.shutdown_request(request)
        except:
            self.handle_error(request, client_address)
            self.shutdown_request(request)

    def process_request(self, request, client_address):
        """Start a new thread to process the request."""
        t = threading.Thread(target=self.process_request_thread,
                             args=(request, client_address))
        t.daemon = self.daemon_threads
        t.start()


class ForkingUDPServer(ForkingMixIn, UDPServer): pass
class ForkingTCPServer(ForkingMixIn, TCPServer): pass
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass


class BaseRequestHandler:
    """Base class for request handler classes.

    This class is instantiated for each request to be handled.  The
    constructor sets the instance variables request, client_address
    and server, and then calls the handle() method.  To implement a
    specific service, all you need to do is to derive a class which
    defines a handle() method.
    """

    def __init__(self, request, client_address, server):
        self.request = request
        self.client_address = client_address
        self.server = server
        self.setup()
        try:
            self.handle()
        finally:
            self.finish()

    def setup(self):
        pass

    def handle(self):
        pass

    def finish(self):
        pass
 

    An excerpt from the socket source code:

# Wrapper module for _socket, providing some additional facilities
# implemented in Python.

"""\
This module provides socket operations and some related functions.

On Unix, it supports IP (Internet Protocol) and Unix domain sockets.
On other systems, it only supports IP. Functions specific for a
socket are available as methods of the socket object.

Functions:

socket() -- create a new socket object
socketpair() -- create a pair of new socket objects [*]
fromfd() -- create a socket object from an open file descriptor [*]
fromshare() -- create a socket object from data received from socket.share() [*]
gethostname() -- return the current hostname
gethostbyname() -- map a hostname to its IP number
gethostbyaddr() -- map an IP number or hostname to DNS info
getservbyname() -- map a service name and a protocol name to a port number
getprotobyname() -- map a protocol name (e.g. 'tcp') to a number
ntohs(), ntohl() -- convert 16, 32 bit int from network to host byte order
htons(), htonl() -- convert 16, 32 bit int from host to network byte order
inet_aton() -- convert IP addr string (123.45.67.89) to 32-bit packed format
inet_ntoa() -- convert 32-bit packed format IP to string (123.45.67.89)
socket.getdefaulttimeout() -- get the default timeout value
socket.setdefaulttimeout() -- set the default timeout value
create_connection() -- connects to an address, with an optional timeout and
                       optional source address.

[*] not available on all platforms!
"""

class socket(_socket.socket):
    """A subclass of _socket.socket adding the makefile() method."""

    def accept(self):
        """accept() -> (socket object, address info)

        Wait for an incoming connection.  Return a new socket
        representing the connection, and the address of the client.
        For IP sockets, the address info is a pair (hostaddr, port).
        """
        fd, addr = self._accept()
        # If our type has the SOCK_NONBLOCK flag, we shouldn't pass it onto
        # the new socket; the returned socket is always blocking.
        type = self.type & ~globals().get("SOCK_NONBLOCK", 0)
        sock = socket(self.family, type, self.proto, fileno=fd)
        # Issue #7995: if no default timeout is set and the listening
        # socket had a (non-zero) timeout, force the new socket in blocking
        # mode to override platform-specific socket flags inheritance.
        if getdefaulttimeout() is None and self.gettimeout():
            sock.setblocking(True)
        return sock, addr
offset=0, count=None):            self._check_sendfile_params(file, offset, count)            sockno = self.fileno()            try:                fileno = file.fileno()            except (AttributeError, io.UnsupportedOperation) as err:                raise _GiveupOnSendfile(err)  # not a regular file            try:                fsize = os.fstat(fileno).st_size            except OSError:                raise _GiveupOnSendfile(err)  # not a regular file            if not fsize:                return 0  # empty file            blocksize = fsize if not count else count            timeout = self.gettimeout()            if timeout == 0:                raise ValueError("non-blocking sockets are not supported")            # poll/select have the advantage of not requiring any            # extra file descriptor, contrarily to epoll/kqueue            # (also, they require a single syscall).            if hasattr(selectors, 'PollSelector'):                selector = selectors.PollSelector()            else:                selector = selectors.SelectSelector()            selector.register(sockno, selectors.EVENT_WRITE)            total_sent = 0            # localize variable access to minimize overhead            selector_select = selector.select            os_sendfile = os.sendfile            try:                while True:                    if timeout and not selector_select(timeout):                        raise _socket.timeout('timed out')                    if count:                        blocksize = count - total_sent                        if blocksize <= 0:                            break                    try:                        sent = os_sendfile(sockno, fileno, offset, blocksize)                    except BlockingIOError:                        if not timeout:                            # Block until the socket is ready to send some                            # data; avoids hogging CPU resources.                            
selector_select()                        continue                    except OSError as err:                        if total_sent == 0:                            # We can get here for different reasons, the main                            # one being 'file' is not a regular mmap(2)-like                            # file, in which case we'll fall back on using                            # plain send().                            raise _GiveupOnSendfile(err)                        raise err from None                    else:                        if sent == 0:                            break  # EOF                        offset += sent                        total_sent += sent                return total_sent            finally:                if total_sent > 0 and hasattr(file, 'seek'):                    file.seek(offset)    else:        def _sendfile_use_sendfile(self, file, offset=0, count=None):            raise _GiveupOnSendfile(                "os.sendfile() not available on this platform")    def _sendfile_use_send(self, file, offset=0, count=None):        self._check_sendfile_params(file, offset, count)        if self.gettimeout() == 0:            raise ValueError("non-blocking sockets are not supported")        if offset:            file.seek(offset)        blocksize = min(count, 8192) if count else 8192        total_sent = 0        # localize variable access to minimize overhead        file_read = file.read        sock_send = self.send        try:            while True:                if count:                    blocksize = min(count - total_sent, blocksize)                    if blocksize <= 0:                        break                data = memoryview(file_read(blocksize))                if not data:                    break  # EOF                while True:                    try:                        sent = sock_send(data)                    except BlockingIOError:                        continue                    else:                        
total_sent += sent                        if sent < len(data):                            data = data[sent:]                        else:                            break            return total_sent        finally:            if total_sent > 0 and hasattr(file, 'seek'):                file.seek(offset + total_sent)    def _check_sendfile_params(self, file, offset, count):        if 'b' not in getattr(file, 'mode', 'b'):            raise ValueError("file should be opened in binary mode")        if not self.type & SOCK_STREAM:            raise ValueError("only SOCK_STREAM type sockets are supported")        if count is not None:            if not isinstance(count, int):                raise TypeError(                    "count must be a positive integer (got {!r})".format(count))            if count <= 0:                raise ValueError(                    "count must be a positive integer (got {!r})".format(count))    def sendfile(self, file, offset=0, count=None):        """sendfile(file[, offset[, count]]) -> sent        Send a file until EOF is reached by using high-performance        os.sendfile() and return the total number of bytes which        were sent.        *file* must be a regular file object opened in binary mode.        If os.sendfile() is not available (e.g. Windows) or file is        not a regular file socket.send() will be used instead.        *offset* tells from where to start reading the file.        If specified, *count* is the total number of bytes to transmit        as opposed to sending the file until EOF is reached.        File position is updated on return or also in case of error in        which case file.tell() can be used to figure out the number of        bytes which were sent.        The socket must be of SOCK_STREAM type.        Non-blocking sockets are not supported.        
"""        try:            return self._sendfile_use_sendfile(file, offset, count)        except _GiveupOnSendfile:            return self._sendfile_use_send(file, offset, count)    def _decref_socketios(self):        if self._io_refs > 0:            self._io_refs -= 1        if self._closed:            self.close()    def _real_close(self, _ss=_socket.socket):        # This function should not reference any globals. See issue #808164.        _ss.close(self)    def close(self):        # This function should not reference any globals. See issue #808164.        self._closed = True        if self._io_refs <= 0:            self._real_close()    def detach(self):        """detach() -> file descriptor        Close the socket object without closing the underlying file descriptor.        The object cannot be used after this call, but the file descriptor        can be reused for other purposes.  The file descriptor is returned.        """        self._closed = True        return super().detach()    @property    def family(self):        """Read-only access to the address family for this socket.        """        return _intenum_converter(super().family, AddressFamily)    @property    def type(self):        """Read-only access to the socket type.        
"""        return _intenum_converter(super().type, SocketKind)    if os.name == 'nt':        def get_inheritable(self):            return os.get_handle_inheritable(self.fileno())        def set_inheritable(self, inheritable):            os.set_handle_inheritable(self.fileno(), inheritable)    else:        def get_inheritable(self):            return os.get_inheritable(self.fileno())        def set_inheritable(self, inheritable):            os.set_inheritable(self.fileno(), inheritable)    get_inheritable.__doc__ = "Get the inheritable flag of the socket"    set_inheritable.__doc__ = "Set the inheritable flag of the socket"def fromfd(fd, family, type, proto=0):    """ fromfd(fd, family, type[, proto]) -> socket object    Create a socket object from a duplicate of the given file    descriptor.  The remaining arguments are the same as for socket().    """    nfd = dup(fd)    return socket(family, type, proto, nfd)if hasattr(_socket.socket, "share"):    def fromshare(info):        """ fromshare(info) -> socket object        Create a socket object from the bytes object returned by        socket.share(pid).        """        return socket(0, 0, 0, info)    __all__.append("fromshare")if hasattr(_socket, "socketpair"):    def socketpair(family=None, type=SOCK_STREAM, proto=0):        """socketpair([family[, type[, proto]]]) -> (socket object, socket object)        Create a pair of socket objects from the sockets returned by the platform        socketpair() function.        The arguments are the same as for socket() except the default family is        AF_UNIX if defined on the platform; otherwise, the default is AF_INET.        
"""        if family is None:            try:                family = AF_UNIX            except NameError:                family = AF_INET        a, b = _socket.socketpair(family, type, proto)        a = socket(family, type, proto, a.detach())        b = socket(family, type, proto, b.detach())        return a, belse:    # Origin: https://gist.github.com/4325783, by Geert Jansen.  Public domain.    def socketpair(family=AF_INET, type=SOCK_STREAM, proto=0):        if family == AF_INET:            host = _LOCALHOST        elif family == AF_INET6:            host = _LOCALHOST_V6        else:            raise ValueError("Only AF_INET and AF_INET6 socket address families "                             "are supported")        if type != SOCK_STREAM:            raise ValueError("Only SOCK_STREAM socket type is supported")        if proto != 0:            raise ValueError("Only protocol zero is supported")        # We create a connected TCP socket. Note the trick with        # setblocking(False) that prevents us from having to create a thread.        
lsock = socket(family, type, proto)        try:            lsock.bind((host, 0))            lsock.listen()            # On IPv6, ignore flow_info and scope_id            addr, port = lsock.getsockname()[:2]            csock = socket(family, type, proto)            try:                csock.setblocking(False)                try:                    csock.connect((addr, port))                except (BlockingIOError, InterruptedError):                    pass                csock.setblocking(True)                ssock, _ = lsock.accept()            except:                csock.close()                raise        finally:            lsock.close()        return (ssock, csock)socketpair.__doc__ = """socketpair([family[, type[, proto]]]) -> (socket object, socket object)Create a pair of socket objects from the sockets returned by the platformsocketpair() function.The arguments are the same as for socket() except the default family is AF_UNIXif defined on the platform; otherwise, the default is AF_INET."""_blocking_errnos = { EAGAIN, EWOULDBLOCK }class SocketIO(io.RawIOBase):    """Raw I/O implementation for stream sockets.    This class supports the makefile() method on sockets.  It provides    the raw I/O interface on top of a socket object.    """    # One might wonder why not let FileIO do the job instead.  
There are two    # main reasons why FileIO is not adapted:    # - it wouldn't work under Windows (where you can't used read() and    #   write() on a socket handle)    # - it wouldn't work with socket timeouts (FileIO would ignore the    #   timeout and consider the socket non-blocking)    # XXX More docs    def __init__(self, sock, mode):        if mode not in ("r", "w", "rw", "rb", "wb", "rwb"):            raise ValueError("invalid mode: %r" % mode)        io.RawIOBase.__init__(self)        self._sock = sock        if "b" not in mode:            mode += "b"        self._mode = mode        self._reading = "r" in mode        self._writing = "w" in mode        self._timeout_occurred = False    def readinto(self, b):        """Read up to len(b) bytes into the writable buffer *b* and return        the number of bytes read.  If the socket is non-blocking and no bytes        are available, None is returned.        If *b* is non-empty, a 0 return value indicates that the connection        was shutdown at the other end.        """        self._checkClosed()        self._checkReadable()        if self._timeout_occurred:            raise OSError("cannot read from timed out object")        while True:            try:                return self._sock.recv_into(b)            except timeout:                self._timeout_occurred = True                raise            except error as e:                if e.args[0] in _blocking_errnos:                    return None                raise    def write(self, b):        """Write the given bytes or bytearray object *b* to the socket        and return the number of bytes written.  This can be less than        len(b) if not all data could be written.  If the socket is        non-blocking and no bytes could be written None is returned.        """        self._checkClosed()        self._checkWritable()        try:            return self._sock.send(b)        except error as e:            # XXX what about EINTR?            
if e.args[0] in _blocking_errnos:                return None            raise    def readable(self):        """True if the SocketIO is open for reading.        """        if self.closed:            raise ValueError("I/O operation on closed socket.")        return self._reading    def writable(self):        """True if the SocketIO is open for writing.        """        if self.closed:            raise ValueError("I/O operation on closed socket.")        return self._writing    def seekable(self):        """True if the SocketIO is open for seeking.        """        if self.closed:            raise ValueError("I/O operation on closed socket.")        return super().seekable()    def fileno(self):        """Return the file descriptor of the underlying socket.        """        self._checkClosed()        return self._sock.fileno()    @property    def name(self):        if not self.closed:            return self.fileno()        else:            return -1    @property    def mode(self):        return self._mode    def close(self):        """Close the SocketIO object.  This doesn't close the underlying        socket, except if all references to it have disappeared.        """        if self.closed:            return        io.RawIOBase.close(self)        self._sock._decref_socketios()        self._sock = Nonedef getfqdn(name=''):    """Get fully qualified domain name from name.    An empty argument is interpreted as meaning the local host.    First the hostname returned by gethostbyaddr() is checked, then    possibly existing aliases. In case no FQDN is available, hostname    from gethostname() is returned.    """    name = name.strip()    if not name or name == '0.0.0.0':        name = gethostname()    try:        hostname, aliases, ipaddrs = gethostbyaddr(name)    except error:        pass    else:        aliases.insert(0, hostname)        for name in aliases:            if '.' 
in name:                break        else:            name = hostname    return name_GLOBAL_DEFAULT_TIMEOUT = object()def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,                      source_address=None):    """Connect to *address* and return the socket object.    Convenience function.  Connect to *address* (a 2-tuple ``(host,    port)``) and return the socket object.  Passing the optional    *timeout* parameter will set the timeout on the socket instance    before attempting to connect.  If no *timeout* is supplied, the    global default timeout setting returned by :func:`getdefaulttimeout`    is used.  If *source_address* is set it must be a tuple of (host, port)    for the socket to bind as a source address before making the connection.    A host of '' or port 0 tells the OS to use the default.    """    host, port = address    err = None    for res in getaddrinfo(host, port, 0, SOCK_STREAM):        af, socktype, proto, canonname, sa = res        sock = None        try:            sock = socket(af, socktype, proto)            if timeout is not _GLOBAL_DEFAULT_TIMEOUT:                sock.settimeout(timeout)            if source_address:                sock.bind(source_address)            sock.connect(sa)            return sock        except error as _:            err = _            if sock is not None:                sock.close()    if err is not None:        raise err    else:        raise error("getaddrinfo returns an empty list")def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):    """Resolve host and port into list of address info entries.    Translate the host/port argument into a sequence of 5-tuples that contain    all the necessary arguments for creating a socket connected to that service.    host is a domain name, a string representation of an IPv4/v6 address or    None. port is a string service name such as 'http', a numeric port number or    None. 
By passing None as the value of host and port, you can pass NULL to    the underlying C API.    The family, type and proto arguments can be optionally specified in order to    narrow the list of addresses returned. Passing zero as a value for each of    these arguments selects the full range of results.    """    # We override this function since we want to translate the numeric family    # and socket type values to enum constants.    addrlist = []    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):        af, socktype, proto, canonname, sa = res        addrlist.append((_intenum_converter(af, AddressFamily),                         _intenum_converter(socktype, SocketKind),                         proto, canonname, sa))    return addrlist
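    The tail of the source above defines the user-facing helpers getaddrinfo() and create_connection(). A minimal sketch of how they behave (the throwaway listener bound to port 0 is illustrative, not part of the original post):

```python
import socket

# getaddrinfo() resolves a host/port pair into 5-tuples; thanks to the
# _intenum_converter wrapper shown above, family and type come back as
# IntEnum members rather than bare integers.
infos = socket.getaddrinfo("127.0.0.1", 80, 0, socket.SOCK_STREAM)
af, socktype, proto, canonname, sa = infos[0]
print(af, socktype)

# create_connection() walks that list and returns the first socket that
# connects successfully. Binding a listener to port 0 lets the OS pick a
# free port, so the demo is self-contained.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.create_connection(listener.getsockname(), timeout=5)
conn, addr = listener.accept()
conn.sendall(b"hello")

# Loop because a single recv() may return fewer bytes than were sent.
reply = b""
while len(reply) < 5:
    reply += client.recv(5 - len(reply))
print(reply)

client.close()
conn.close()
listener.close()
```

    Note that the socket returned by create_connection() is the enum-aware subclass defined above, so its family property reads back as socket.AF_INET.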

    The BaseRequestHandler source code:

class BaseRequestHandler:

    """Base class for request handler classes.

    This class is instantiated for each request to be handled.  The
    constructor sets the instance variables request, client_address
    and server, and then calls the handle() method.  To implement a
    specific service, all you need to do is to derive a class which
    defines a handle() method.

    The handle() method can find the request as self.request, the
    client address as self.client_address, and the server (in case it
    needs access to per-server information) as self.server.  Since a
    separate instance is created for each request, the handle() method
    can define other arbitrary instance variables.

    """

    def __init__(self, request, client_address, server):
        self.request = request
        self.client_address = client_address
        self.server = server
        self.setup()
        try:
            self.handle()
        finally:
            self.finish()

    def setup(self):
        pass

    def handle(self):
        pass

    def finish(self):
        pass

    BaseRequestHandler defines three methods: setup(), handle(), and finish(). All three are empty stubs meant to be overridden by the user. As the constructor shows, setup() runs once per connection before handle(), to prepare for the incoming request, and finish() runs after handle() returns (even if handle() raised an exception), to clean up.
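    Putting the three hooks together: a sketch of a handler that overrides all of setup(), handle(), and finish(). The class name and messages here are illustrative, not from the original post; the demo starts the server in a background thread and talks to it once so it terminates on its own:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """Overrides all three hooks inherited from BaseRequestHandler."""

    def setup(self):
        # Runs once per connection, before handle().
        print("connected:", self.client_address)

    def handle(self):
        # self.request is the TCP socket connected to this client.
        data = self.request.recv(1024).strip()
        self.request.sendall(data.upper())

    def finish(self):
        # Runs after handle(), even if handle() raised an exception.
        print("disconnected:", self.client_address)

# ThreadingTCPServer handles each connection in its own thread, which is
# what gives socketserver its one-to-many behaviour. Binding to port 0
# lets the OS pick a free port so the demo is self-contained.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = socket.create_connection(server.server_address)
client.sendall(b"hello\n")

# Read until the server closes the connection (it does so after handle()
# and finish() have run for this request).
reply = b""
while True:
    chunk = client.recv(1024)
    if not chunk:
        break
    reply += chunk
print(reply)

client.close()
server.shutdown()
server.server_close()
```

    In a real server you would of course call serve_forever() in the main thread, exactly as in the basic example at the top of the post; the thread here only exists so the snippet can shut itself down.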
