My Jumble of Computer Vision

I am going to maintain this page to record the computer vision material I have read, am working on, or plan to look at. I used to write short notes on the papers I read, which is a good way to remember and understand the authors' ideas. But gradually I found that I forgot much of what I had learnt, because besides papers I also pick up knowledge from other people's blogs, online courses, and reports, and I never recorded those at all. I also need a place to keep a list of things I should look at but cannot get to at the time I discover them. This page will be much like a catalog.

Papers and Projects

Object/Saliency Detection

  • PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection (PDF, Project/Code, Reading Note)
  • Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks (PDF, Reading Note)
  • Object Detection from Video Tubelets with Convolutional Neural Networks (PDF, Reading Note)
  • R-FCN: Object Detection via Region-based Fully Convolutional Networks (PDF, Project/Code, Reading Note)
  • SSD: Single Shot MultiBox Detector (PDF, Project/Code, Reading Note)
  • Pushing the Limits of Deep CNNs for Pedestrian Detection (PDF, Reading Note)
  • Object Detection by Labeling Superpixels (PDF, Reading Note)
  • Crafting GBD-Net for Object Detection (PDF, Project/Code)
    Code for CUImage and CUVideo, the object detection champions of ImageNet 2016.
  • Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection (PDF, Reading Note)
  • Training Region-based Object Detectors with Online Hard Example Mining (PDF, Reading Note)
  • Detecting People in Artwork with CNNs (PDF, Project/Code)
  • Deeply supervised salient object detection with short connections (PDF)
  • Learning to detect and localize many objects from few examples (PDF)
  • Multi-Scale Saliency Detection using Dictionary Learning (PDF)
  • Straight to Shapes: Real-time Detection of Encoded Shapes (PDF)
  • Weakly Supervised Cascaded Convolutional Networks (PDF, Reading Note)
  • Speed/accuracy trade-offs for modern convolutional object detectors (PDF, Reading Note)
  • Object Detection via End-to-End Integration of Aspect Ratio and Context Aware Part-based Models and Fully Convolutional Networks (PDF)
  • Feature Pyramid Networks for Object Detection (PDF, Reading Note)
  • COCO-Stuff: Thing and Stuff Classes in Context (PDF)
  • Finding Tiny Faces (PDF)
  • Beyond Skip Connections: Top-Down Modulation for Object Detection (PDF, Reading Note)
  • YOLO9000: Better, Faster, Stronger (PDF, Project/Code, Reading Note)
  • SalGAN: Visual Saliency Prediction with Generative Adversarial Networks (PDF, Project/Code)
  • Quantitative Analysis of Automatic Image Cropping Algorithms: A Dataset and Comparative Study (PDF)
  • To Boost or Not to Boost? On the Limits of Boosted Trees for Object Detection (PDF)
  • Pixel Objectness (PDF, Project/Code, Reading Note)
  • DSSD : Deconvolutional Single Shot Detector (PDF, Reading Note)
  • A Fast and Compact Salient Score Regression Network Based on Fully Convolutional Network (PDF)
  • Wide-Residual-Inception Networks for Real-time Object Detection (PDF)
  • Zoom Out-and-In Network with Recursive Training for Object Proposal (PDF, Project/Code)
  • Improving Object Detection with Region Similarity Learning (PDF)
  • Tree-Structured Reinforcement Learning for Sequential Object Localization (PDF)
  • Weakly Supervised Object Localization Using Things and Stuff Transfer (PDF)

Segmentation/Parsing

  • Instance-aware Semantic Segmentation via Multi-task Network Cascades (PDF, Project/Code)
  • ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation (PDF, Reading Note)
  • Learning Deconvolution Network for Semantic Segmentation (PDF, Reading Note)
  • Semantic Object Parsing with Graph LSTM (PDF, Reading Note)
  • Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding (PDF, Reading Note)
  • Learning to Segment Moving Objects in Videos (PDF, Reading Note)
  • Deep Structured Features for Semantic Segmentation (PDF)

    We propose a highly structured neural network architecture for semantic segmentation of images that combines i) a Haar wavelet-based tree-like convolutional neural network (CNN), ii) a random layer realizing a radial basis function kernel approximation, and iii) a linear classifier. While stages i) and ii) are completely pre-specified, only the linear classifier is learned from data. Thanks to its high degree of structure, our architecture has a very small memory footprint and thus fits onto low-power embedded and mobile platforms. We apply the proposed architecture to outdoor scene and aerial image semantic segmentation and show that the accuracy of our architecture is competitive with conventional pixel classification CNNs. Furthermore, we demonstrate that the proposed architecture is data efficient in the sense of matching the accuracy of pixel classification CNNs when trained on a much smaller data set.
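
    A notable design choice here is that only the final linear classifier is trained, while the random layer approximating an RBF kernel stays fixed. Below is a minimal numpy sketch of that idea (random Fourier features followed by a learned linear map) on toy data; it only illustrates the random-feature stage, omits the paper's wavelet CNN front-end, and all dimensions and the bandwidth are assumed.

```python
import numpy as np

rng = np.random.RandomState(0)
n_features, n_random = 64, 256   # input dim and number of random features (assumed)
gamma = 0.5                      # RBF bandwidth (assumed)

# Fixed, untrained random layer: random Fourier features whose inner products
# approximate the RBF kernel exp(-gamma * ||x - y||^2) (Rahimi & Recht style).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, n_random))
b = rng.uniform(0, 2 * np.pi, size=n_random)

def rbf_features(X):
    return np.sqrt(2.0 / n_random) * np.cos(X @ W + b)

# Toy two-class data standing in for per-pixel descriptors.
X = np.vstack([rng.normal(0, 1, (200, n_features)),
               rng.normal(2, 1, (200, n_features))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Only this linear classifier is learned (ridge regression, closed form).
Z = rbf_features(X)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(n_random), Z.T @ (2 * y - 1))

pred = (rbf_features(X) @ w > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```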

  • CNN-aware Binary Map for General Semantic Segmentation (PDF)

  • Learning to Refine Object Segments (PDF)
  • Clockwork Convnets for Video Semantic Segmentation (PDF, Project/Code)
  • Convolutional Gated Recurrent Networks for Video Segmentation (PDF)
  • Efficient Convolutional Neural Network with Binary Quantization Layer (PDF)
  • One-Shot Video Object Segmentation (PDF)
  • Fully Convolutional Instance-aware Semantic Segmentation (PDF, Project/Code, Reading Note)
  • Semantic Segmentation using Adversarial Networks (PDF)
  • Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes (PDF)
  • Deep Watershed Transform for Instance Segmentation (PDF)
  • InstanceCut: from Edges to Instances with MultiCut (PDF)
  • The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation (PDF)
  • Improving Fully Convolution Network for Semantic Segmentation (PDF)
  • Video Scene Parsing with Predictive Feature Learning (PDF)
  • Training Bit Fully Convolutional Network for Fast Semantic Segmentation (PDF)
  • Pyramid Scene Parsing Network (PDF, Reading Note)
  • Mining Pixels: Weakly Supervised Semantic Segmentation Using Image Labels (PDF)
  • FastMask: Segment Object Multi-scale Candidates in One Shot (PDF, Project/Code)
  • A New Convolutional Network-in-Network Structure and Its Applications in Skin Detection, Semantic Segmentation, and Artifact Reduction (PDF, Reading Note)
  • FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos (PDF)
  • Visual Saliency Prediction Using a Mixture of Deep Neural Networks (PDF)
  • PixelNet: Representation of the pixels, by the pixels, and for the pixels (PDF, Project/Code)
  • Super-Trajectory for Video Segmentation (PDF)
  • Understanding Convolution for Semantic Segmentation (PDF, Reading Note)
  • Adversarial Examples for Semantic Image Segmentation (PDF)
  • Large Kernel Matters – Improve Semantic Segmentation by Global Convolutional Network (PDF)
  • Deep Image Matting (PDF, Reading Note)
  • Mask R-CNN (PDF)
  • Predicting Deeper into the Future of Semantic Segmentation (PDF)
  • Convolutional Oriented Boundaries: From Image Segmentation to High-Level Tasks (PDF, Project/Code)
  • One-Shot Video Object Segmentation (PDF, Project/Code)

Tracking

  • Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking (PDF, Reading Note)
  • Joint Tracking and Segmentation of Multiple Targets (PDF, Reading Note)
  • Deep Tracking on the Move: Learning to Track the World from a Moving Vehicle using Recurrent Neural Networks (PDF)
  • Convolutional Regression for Visual Tracking (PDF)
  • Kernelized Correlation Filters (Project, CODE1, CODE2)
  • Online Visual Multi-Object Tracking via Labeled Random Finite Set Filtering (PDF)
  • SANet: Structure-Aware Network for Visual Tracking (PDF)
  • Semantic tracking: Single-target tracking with inter-supervised convolutional networks (PDF)
  • On The Stability of Video Detection and Tracking (PDF)
  • Dual Deep Network for Visual Tracking (PDF)
  • Deep Motion Features for Visual Tracking (PDF)
  • Robust and Real-time Deep Tracking Via Multi-Scale Domain Adaptation (PDF, Project/Code)
  • Instance Flow Based Online Multiple Object Tracking (PDF)
  • PathTrack: Fast Trajectory Annotation with Path Supervision (PDF)

Pose Estimation

  • Chained Predictions Using Convolutional Neural Networks (PDF, Reading Note)
  • CRF-CNN: Modeling Structured Information in Human Pose Estimation (PDF)
  • Convolutional Pose Machines (PDF, Project/Code, Reading Note)
  • Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields (PDF, Project/Code, Reading Note)
  • Towards Accurate Multi-person Pose Estimation in the Wild (PDF, Reading Note)

Action Recognition/Event Detection/Video

  • Pooling the Convolutional Layers in Deep ConvNets for Action Recognition (PDF, Reading Note)
  • Two-Stream Convolutional Networks for Action Recognition in Videos (PDF, Reading Note)
  • YouTube-8M: A Large-Scale Video Classification Benchmark (PDF, Project/Code)
  • Spatiotemporal Residual Networks for Video Action Recognition (PDF)
  • An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data (PDF)
  • Fast Video Classification via Adaptive Cascading of Deep Models (PDF)
  • Video Pixel Networks (PDF)
  • Plug-and-Play CNN for Crowd Motion Analysis: An Application in Abnormal Event Detection (PDF)
  • EM-Based Mixture Models Applied to Video Event Detection (PDF)
  • Video Captioning and Retrieval Models with Semantic Attention (PDF)
  • Title Generation for User Generated Videos (PDF)
  • Review of Action Recognition and Detection Methods (PDF)
  • Recurrent Mixture Density Network for Spatiotemporal Visual Attention (PDF)
  • Self-Supervised Video Representation Learning With Odd-One-Out Networks (PDF)
  • Recurrent Memory Addressing for describing videos (PDF)
  • Online Real time Multiple Spatiotemporal Action Localisation and Prediction on a Single Platform (PDF)
  • Real-Time Video Highlights for Yahoo Esports (PDF)
  • Surveillance Video Parsing with Single Frame Supervision (PDF)
  • Anomaly Detection in Video Using Predictive Convolutional Long Short-Term Memory Networks (PDF)
  • Action Recognition with Dynamic Image Networks (PDF)
  • ActionFlowNet: Learning Motion Representation for Action Recognition (PDF)
  • Video Propagation Networks (PDF)
  • Detecting events and key actors in multi-person videos (PDF)
  • A Pursuit of Temporal Accuracy in General Activity Detection (PDF, Reading Note)

Face

  • Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks (PDF, Project/Code)
  • Deep Architectures for Face Attributes (PDF)
  • Face Detection with End-to-End Integration of a ConvNet and a 3D Model (PDF, Reading Note, Project/Code)
  • A CNN Cascade for Landmark Guided Semantic Part Segmentation (PDF, Project/Code)
  • Kernel Selection using Multiple Kernel Learning and Domain Adaptation in Reproducing Kernel Hilbert Space, for Face Recognition under Surveillance Scenario (PDF)
  • An All-In-One Convolutional Neural Network for Face Analysis (PDF)
  • Fast Face-swap Using Convolutional Neural Networks (PDF)
  • Cross-Age Reference Coding for Age-Invariant Face Recognition and Retrieval (Project/Code)
  • CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection (Project/Code)
  • Face Synthesis from Facial Identity Features (PDF)
  • DeepFace: Face Generation using Deep Learning (PDF)
  • Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns (PDF, Project/Code)
  • EmotioNet Challenge: Recognition of facial expressions of emotion in the wild (PDF)
  • Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation (PDF)

Optical Flow

  • DeepFlow: Large displacement optical flow with deep matching (PDF, Project/Code)
  • Guided Optical Flow Learning (PDF)

Image Processing

  • Learning Recursive Filter for Low-Level Vision via a Hybrid Neural Network (PDF, Project/Code)
  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (PDF, Project/Code)
  • A Learned Representation For Artistic Style (PDF)
  • Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification (PDF, Project/Code)
  • Pixel Recurrent Neural Networks (PDF)
  • Conditional Image Generation with PixelCNN Decoders (PDF, Project/Code)
  • RAISR: Rapid and Accurate Image Super Resolution (PDF)
  • Photo-Quality Evaluation based on Computational Aesthetics: Review of Feature Extraction Techniques (PDF)
  • Fast color transfer from multiple images (PDF)
  • Bringing Impressionism to Life with Neural Style Transfer in Come Swim (PDF)
  • PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications (PDF, Project/Code: https://github.com/openai/pixel-cnn)
  • Deep Photo Style Transfer (PDF)
  • GP-GAN: Towards Realistic High-Resolution Image Blending (PDF, Project/Code)

CNN and Deep Learning

  • UberNet: Training a 'Universal' Convolutional Neural Network for Low-, Mid-, and High-Level Vision using Diverse Datasets and Limited Memory (PDF, Project/Code)
  • What makes ImageNet good for transfer learning? (PDF, Project/Code, Reading Note)

    The tremendous success of features learnt using the ImageNet classification task on a wide range of transfer tasks begs the question: what are the intrinsic properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class?

  • Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units (PDF)

  • Densely Connected Convolutional Networks (PDF, Project/Code, Reading Note)
  • Decoupled Neural Interfaces using Synthetic Gradients (PDF)

    Training directed neural networks typically requires forward-propagating data through a computation graph, followed by backpropagating error signal, to produce weight updates. All layers, or more generally, modules, of the network are therefore locked, in the sense that they must wait for the remainder of the network to execute forwards and propagate error backwards before they can be updated. In this work we break this constraint by decoupling modules by introducing a model of the future computation of the network graph. These models predict what the result of the modeled sub-graph will produce using only local information. In particular we focus on modeling error gradients: by using the modeled synthetic gradient in place of true backpropagated error gradients we decouple subgraphs, and can update them independently and asynchronously.
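
    A toy numpy sketch of the core idea, under heavy simplifying assumptions (two linear modules on a toy regression task, and a linear synthetic-gradient model): the first module updates immediately using a predicted gradient of the loss with respect to its output, and the predictor itself is later fit to the true gradient once it becomes available.

```python
import numpy as np

rng = np.random.RandomState(0)
d_in, d_hid, lr = 4, 8, 0.05

W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # module 1
W2 = rng.normal(scale=0.1, size=(d_hid, 1))      # module 2
M = np.zeros((d_hid, d_hid))                     # synthetic-gradient model (assumed linear)

for step in range(2000):
    x = rng.normal(size=(16, d_in))
    y = x.sum(axis=1, keepdims=True)             # toy regression target

    # Module 1 runs forward and is updated right away with a *synthetic*
    # gradient dL/dh predicted from its own activations only.
    h = x @ W1
    g_synth = h @ M
    W1 -= lr * x.T @ g_synth / len(x)

    # Module 2 runs later and yields the *true* gradient at h.
    err = h @ W2 - y                              # dL/dpred for 0.5 * squared error
    g_true = err @ W2.T                           # true dL/dh
    W2 -= lr * h.T @ err / len(x)

    # Fit the synthetic-gradient model toward the true gradient.
    M -= lr * h.T @ (g_synth - g_true) / len(x)

    if step % 500 == 0:
        print(step, float(0.5 * (err ** 2).mean()))
```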

  • Rethinking the Inception Architecture for Computer Vision (PDF, Reading Note)

    In this paper, several network design choices are discussed, including factorizing convolutions into smaller and asymmetric kernels, the utility of auxiliary classifiers, and reducing grid size using convolution stride rather than pooling.
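
    As a quick sanity check of the parameter savings from those factorizations (plain arithmetic, assuming C input and C output channels and ignoring biases):

```python
C = 64  # channel count, assumed

conv5x5 = 5 * 5 * C * C
two_conv3x3 = 2 * (3 * 3 * C * C)        # 5x5 factorized into two stacked 3x3
conv3x3 = 3 * 3 * C * C
asym = (1 * 3 + 3 * 1) * C * C           # 3x3 factorized into 1x3 followed by 3x1

print(f"5x5 vs two 3x3: {conv5x5} -> {two_conv3x3} ({two_conv3x3 / conv5x5:.0%})")
print(f"3x3 vs 1x3+3x1: {conv3x3} -> {asym} ({asym / conv3x3:.0%})")
```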

  • Factorized Convolutional Neural Networks (PDF, Reading Note)

  • Do semantic parts emerge in Convolutional Neural Networks? (PDF, Reading Note)
  • A Critical Review of Recurrent Neural Networks for Sequence Learning (PDF)
  • Image Compression with Neural Networks (Project/Code)
  • Graph Convolutional Networks (Project/Code)
  • Understanding intermediate layers using linear classifier probes (PDF, Reading Note)
  • Learning What and Where to Draw (PDF, Project/Code)
  • On the interplay of network structure and gradient convergence in deep learning (PDF)
  • Deep Learning with Separable Convolutions (PDF)
  • Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization (PDF, Project/Code)
  • Optimization of Convolutional Neural Network using Microcanonical Annealing Algorithm (PDF)
  • Deep Pyramidal Residual Networks (PDF)
  • Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets (PDF)
  • Uncertainty in Deep Learning (PDF, Project/Code)
    This is the PhD Thesis of Yarin Gal.
  • Tensorial Mixture Models (PDF, Project/Code)
  • Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks (PDF)
  • Why Deep Neural Networks? (PDF)
  • Local Similarity-Aware Deep Feature Embedding (PDF)
  • A Review of 40 Years of Cognitive Architecture Research: Focus on Perception, Attention, Learning and Applications (PDF)
  • Professor Forcing: A New Algorithm for Training Recurrent Networks (PDF)
  • On the expressive power of deep neural networks (PDF)
  • What Is the Best Practice for CNNs Applied to Visual Instance Retrieval? (PDF)
  • Deep Convolutional Neural Network Design Patterns (PDF, Project/Code)
  • Tricks from Deep Learning (PDF)
  • A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models (PDF)
  • Multi-Shot Mining Semantic Part Concepts in CNNs (PDF)
  • Aggregated Residual Transformations for Deep Neural Networks (PDF, Reading Note)
  • PolyNet: A Pursuit of Structural Diversity in Very Deep Networks (PDF)
  • On the Exploration of Convolutional Fusion Networks for Visual Recognition (PDF)
  • ResFeats: Residual Network Based Features for Image Classification (PDF)
  • Object Recognition with and without Objects (PDF)
  • LCNN: Lookup-based Convolutional Neural Network (PDF, Reading Note)
  • Inductive Bias of Deep Convolutional Networks through Pooling Geometry (PDF, Project/Code)
  • Wider or Deeper: Revisiting the ResNet Model for Visual Recognition (PDF, Reading Note)
  • Multi-Scale Context Aggregation by Dilated Convolutions (PDF, Project/Code)
  • Large-Margin Softmax Loss for Convolutional Neural Networks (PDF, mxnet Code, Caffe Code)
  • Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics (PDF)
  • Feedback Networks (PDF)
  • Visualizing Residual Networks (PDF)
  • Convolutional Oriented Boundaries: From Image Segmentation to High-Level Tasks (PDF, Project/Code)
  • Understanding trained CNNs by indexing neuron selectivity (PDF)
  • Benchmarking State-of-the-Art Deep Learning Software Tools (PDF, Project/Code)
  • Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models (PDF)
  • Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (PDF, Project/Code)
  • ShaResNet: reducing residual network parameter number by sharing weights (PDF)
  • Deep Forest: Towards An Alternative to Deep Neural Networks (PDF)
  • All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation (PDF)
  • Genetic CNN (PDF)
  • Deformable Convolutional Networks (PDF)
  • Quality Resilient Deep Neural Networks (PDF)
  • How ConvNets model Non-linear Transformations (PDF)
  • Active Convolution: Learning the Shape of Convolution for Image Classification (PDF)
  • Multi-Scale Dense Convolutional Networks for Efficient Prediction (PDF, Project/Code)

GAN

  • Generative Adversarial Networks (PDF)
  • Stacked Generative Adversarial Networks (PDF)
  • Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks (PDF)
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (PDF)
  • Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks (PDF)
  • NIPS 2016 Tutorial: Generative Adversarial Networks (PDF)
  • Wasserstein GAN (PDF)
  • Adversarial Discriminative Domain Adaptation (PDF, Reading Note)
  • Generative Adversarial Nets with Labeled Data by Activation Maximization (PDF)
  • Triple Generative Adversarial Nets (PDF)
  • On the Quantitative Evaluation of Deep Generative Models (PDF)
  • Adversarial Transformation Networks: Learning to Generate Adversarial Examples (PDF)

Machine Learning

  • Computer Vision and Machine Learning: Random Forests
  • Computer Vision and Machine Learning: Activation Functions in Deep Learning
  • 我爱机器学习 (I Love Machine Learning), a site of practical machine learning material
  • Bayesian Reasoning and Machine Learning

Embedded

  • Caffeinated FPGAs: FPGA Framework For Convolutional Neural Networks (PDF)
  • Comprehensive Evaluation of OpenCL-based Convolutional Neural Network Accelerators in Xilinx and Altera FPGAs (PDF)
  • FINN: A Framework for Fast, Scalable Binarized Neural Network Inference (PDF)
  • Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices (PDF)
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size (PDF, Project/Code)

Other

  • Learning Aligned Cross-Modal Representations from Weakly Aligned Data (PDF, Project/Code)
  • Multi-Task Curriculum Transfer Deep Learning of Clothing Attributes (PDF)
  • End-to-end Learning of Deep Visual Representations for Image Retrieval (PDF)
  • SoundNet: Learning Sound Representations from Unlabeled Video (PDF)
  • Bags of Local Convolutional Features for Scalable Instance Search (PDF, Project/Code)
  • Universal Correspondence Network (PDF, Project/Code)
  • Judging a Book By its Cover (PDF)
  • Generalisation and Sharing in Triplet Convnets for Sketch based Visual Search (PDF)
  • Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification (PDF)
  • Automatic generation of large-scale handwriting fonts via style learning (PDF)
  • Image Retrieval with Deep Local Features and Attention-based Keypoints (PDF)
  • Visual Discovery at Pinterest (PDF)
  • Learning to Detect Human-Object Interactions (PDF, Project/Code, Reading Note)
  • Learning Deep Features via Congenerous Cosine Loss for Person Recognition (PDF)
  • Large-Scale Evolution of Image Classifiers (PDF)
  • Deep Variation-structured Reinforcement Learning for Visual Relationship and Attribute Detection (PDF)
  • Twitter100k: A Real-world Dataset for Weakly Supervised Cross-Media Retrieval (PDF, Project/Code)

Interesting Finds

Resources/Perspectives

  • arXiv(Computer Vision and Pattern Recognition)
    A good place to explore latest papers.
  • Awesome Computer Vision
    A curated list of awesome computer vision resources.
  • Awesome Deep Vision
    A curated list of deep learning resources for computer vision.
  • Awesome MXNet
    This page contains a curated list of awesome MXnet examples, tutorials and blogs.
  • Awesome TensorFlow
    A curated list of awesome TensorFlow experiments, libraries, and projects.
  • Deep Reinforcement Learning survey
    This paper list is a bit different from others. The author puts some opinion and summary on it. However, to understand the whole paper, you still have to read it by yourself!
  • TensorFlow Official Documentation (Chinese translation)
  • TensorTalk
    A place to find the code for the latest work.
  • OTB Results
    Object tracking benchmark
  • Adversarial Nets Papers
  • Creating Human-Level AI

Projects

  • TensorFlow Examples
    TensorFlow tutorial with implementations of popular machine learning algorithms. This tutorial was designed for easily diving into TensorFlow through examples. It is suitable for beginners who want to find clear and concise examples of TensorFlow. For readability, the tutorial includes both notebooks and code with explanations.
  • TensorFlow Tutorials
    These tutorials are intended for beginners in Deep Learning and TensorFlow. Each tutorial covers a single topic. The source-code is well-documented. There is a YouTube video for each tutorial.
  • Home Surveillance with Facial Recognition
  • Deep Learning algorithms with TensorFlow
    This repository is a collection of various Deep Learning algorithms implemented using the TensorFlow library. This package is intended as a command line utility you can use to quickly train and evaluate popular Deep Learning models and maybe use them as benchmark/baseline in comparison to your custom models/datasets.
  • TensorLayer
    TensorLayer is designed for use by both researchers and engineers; it is a transparent library built on top of Google TensorFlow. It provides a higher-level API to TensorFlow in order to speed up experimentation and development. TensorLayer is easy to extend and modify. In addition, it provides many examples and tutorials to help you work through deep learning and reinforcement learning.
  • Easily Create High Quality Object Detectors with Deep Learning
    Using dlib to train a CNN-based detector.
  • Command Line Neural Network
    Neuralcli provides a simple command line interface to a Python implementation of a simple classification neural network. Neuralcli offers a quick and easy way to get instant feedback on a hypothesis or to play around with one of the most popular concepts in machine learning today.
  • LSTM for Human Activity Recognition
    Human activity recognition using the smartphones dataset and an LSTM RNN. The project is based on TensorFlow. An MXNet implementation is available as MXNET-Scala Human Activity Recognition.
  • YOLO in caffe
    This is a Caffe implementation of YOLO: Real-Time Object Detection.
  • SSD: Single Shot MultiBox Object Detector in mxnet
  • MTCNN face detection and alignment in MXNet
    This is a Python/MXNet implementation of Zhang's work.
  • CNTK Examples: Image/Detection/Fast R-CNN
  • Self Driving (Toy) Ferrari
  • Finding Lane Lines on the Road
  • Magenta
    Magenta is a project from the Google Brain team that asks: Can we use machine learning to create compelling art and music? If so, how? If not, why not?
  • Adversarial Nets Papers
    The classical papers about adversarial nets.
  • Mushreco
    Take a photo of a mushroom and see which species it is. It can determine over 200 different species.
  • Neural Enhance
    The neural network is hallucinating details based on its training from example images. It’s not reconstructing your photo exactly as it would have been if it was HD. That’s only possible in Hollywood — but using deep learning as “Creative AI” works and it is just as cool!
  • CNN Models by CVGJ
    This repository contains convolutional neural network (CNN) models trained on ImageNet by Marcel Simon at the Computer Vision Group Jena (CVGJ) using the Caffe framework. Each model is in a separate subfolder and contains everything needed to reproduce the results. The repository currently contains the batch-normalization variants of AlexNet and VGG19, as well as the training code for Residual Networks (ResNet).
  • YOLO2

    YOLOv2 uses a few tricks to improve training and increase performance. Like Overfeat and SSD we use a fully-convolutional model, but we still train on whole images, not hard negatives. Like Faster R-CNN we adjust priors on bounding boxes instead of predicting the width and height outright. However, we still predict the x and y coordinates directly. The full details are in our paper soon to be released on Arxiv, stay tuned!
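
    For reference, box decoding with bounding-box priors typically looks like the sketch below (variable names follow the YOLO9000 paper's notation): x and y come out of a sigmoid as offsets within the grid cell, while width and height rescale a prior rather than being regressed outright.

```python
import numpy as np

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Turn one raw prediction (tx, ty, tw, th) into a box, given the grid
    cell corner (cx, cy) and a prior (anchor) of size (pw, ph)."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    bx = cx + sigmoid(tx)        # x, y predicted directly as offsets in the cell
    by = cy + sigmoid(ty)
    bw = pw * np.exp(tw)         # width/height scale the prior instead of
    bh = ph * np.exp(th)         # being predicted outright
    return bx, by, bw, bh

# Example: a cell at (3, 5) with a 1.5 x 2.0 prior, all in grid units (assumed).
print(decode_box(0.2, -0.1, 0.3, 0.0, cx=3, cy=5, pw=1.5, ph=2.0))
```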

  • Lightened CNN for Deep Face Representation
    The Deep Face Representation Experiment is based on a convolutional neural network that learns a robust feature for the face verification task.

  • Recurrent dreams and filling in
  • MTCNN in MXnet
  • openai-gemm

    Open single and half precision gemm implementations. The main speedups over cublas are with small minibatch and in fp16 data formats.

  • Neural Style

    Style transfer with MXNet.

  • Can Convolutional Neural Networks Crack Sudoku Puzzles?

  • cleverhans

    This repository contains the source code for cleverhans, a Python library to benchmark machine learning systems' vulnerability to adversarial examples.

  • A deep learning traffic light detector using dlib and a few images from Google street view

  • Paints Chainer
  • Calculate deep convolution neurAl network on Cell Unit
  • Deep Video Analytics
    Deep Video Analytics provides a platform for indexing and extracting information from videos and images. Deep learning detection and recognition algorithms are used for indexing individual frames / images along with detected objects. The goal of Deep Video Analytics is to become a quickly customizable platform for developing visual & video analytics applications, while benefiting from seamless integration with state-of-the-art models released by the vision research community.
  • Yolo_mark
    Windows GUI for marking bounding boxes of objects in images for training Yolo v2.
  • Yolo-Windows v2 - Windows version of Yolo Convolutional Neural Networks

News/Blogs

  • MIT Technology Review
    A good place to keep up with the trends.
  • LAB41
    Lab41 is a Silicon Valley challenge lab where experts from the U.S. Intelligence Community (IC), academia, industry, and In-Q-Tel come together to gain a better understanding of how to work with — and ultimately use — big data.
  • Partnership on AI
    Amazon, DeepMind/Google, Facebook, IBM, and Microsoft announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI).
  • 爱可可-爱生活: the recommendations from this account are well worth a look
  • Guide to deploying deep-learning inference networks and realtime object recognition tutorial for NVIDIA Jetson TX1
  • A Return to Machine Learning
    This post is aimed at artists and other creative people who are interested in a survey of recent developments in machine learning research that intersect with art and culture. If you’ve been following ML research recently, you might find some of the experiments interesting but will want to skip most of the explanations.
  • ResNets, HighwayNets, and DenseNets, Oh My!
    This post walks through the logic behind three recent deep learning architectures: ResNet, HighwayNet, and DenseNet. Each makes it easier to train deep networks successfully by overcoming limitations of traditional network design.
  • How to build a robot that “sees” with $100 and TensorFlow

    I wanted to build a robot that could recognize objects. Years of experience building computer programs and doing test-driven development have turned me into a menace working on physical projects. In the real world, testing your buggy device can burn down your house, or at least fry your motor and force you to wait a couple of days for replacement parts to arrive.

  • Navigating the unsupervised learning landscape
    Unsupervised learning is the Holy Grail of Deep Learning. The goal of unsupervised learning is to create general systems that can be trained with little data. Very little data.

  • Deconvolution and Checkerboard Artifacts
  • Facial Recognition on a Jetson TX1 in Tensorflow
    Here's a way to hack a facial recognition system together in a relatively short time on NVIDIA's Jetson TX1.
  • Deep Learning with Generative and Adversarial Networks – ICLR 2017 Discoveries
    This blog post gives an overview of papers related to Deep Learning with Generative and Adversarial Networks submitted to ICLR 2017.
  • Unsupervised Deep Learning – ICLR 2017 Discoveries
    This blog post gives an overview of papers related to Unsupervised Deep Learning submitted to ICLR 2017.
  • You Only Look Twice — Multi-Scale Object Detection in Satellite Imagery With Convolutional Neural Networks
  • Deep Learning isn’t the brain
  • iSee: Using deep learning to remove eyeglasses from faces
  • Decoding The Thought Vector
  • Algorithmia will help you make your own AI-powered photo filters
  • Deep Learning Enables You to Hide Screen when Your Boss is Approaching
  • Research | Dual Learning: A New Machine Learning Paradigm
  • How to Train a GAN? Tips and tricks to make GANs work

    While research in Generative Adversarial Networks (GANs) continues to improve the fundamental stability of these models, we use a bunch of tricks to train them and make them stable day to day.

  • Highlights of IEEE Big Data 2016: Nearest Neighbours, Outliers and Deep Learning

  • Some CNN visualization tools and techniques

    Besides this post, the author's other posts are also worth reading.

  • Deep Learning 2016: The Year in Review
  • GANs will change the world
  • colah’s blog
  • Analysis of Dropout
  • NIPS 2016 Review
  • Ranking: Top 16 Most Popular Deep Learning Application Projects on GitHub (continuously updated)
  • Why use SVM?
  • TensorFlow Image Recognition on a Raspberry Pi
  • Building Your Own Deep Learning Box
  • Vehicle tracking using a support vector machine vs. YOLO
  • Understanding, generalisation, and transfer learning in deep neural networks
  • NVIDIA Announces The Jetson TX2, Powered By NVIDIA’s “Denver 2” CPU & Pascal Graphics
  • Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?
  • Flexible Image Tagging with Fast0Tag
  • Eye Fidelity: How Deep Learning Will Help Your Smartphone Track Your Gaze
  • Using Deep Learning to Find Similar Dresses
  • Rules of Machine Learning: Best Practices for ML Engineering

Benchmark/Leaderboard/Dataset

  • Visual Tracker Benchmark
    This website contains data and code for the benchmark evaluation of online visual tracking algorithms. Join the visual-tracking Google group for further updates, discussions, or Q&A.
  • Multiple Object Tracking Benchmark
    With this benchmark we would like to pave the way for a unified framework towards more meaningful quantification of multi-target tracking.
  • Leaderboards for the Evaluations on PASCAL VOC Data
  • Open Images dataset
    Open Images is a dataset of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories.
  • Open Sourcing 223GB of Driving Data
    223GB of image frames and log data from 70 minutes of driving in Mountain View on two separate days, with one day being sunny, and the other overcast.
  • MS COCO
  • UMDFaces Dataset
    UMDFaces is a face dataset which has 367,920 faces of 8,501 subjects. From this page you can download the entire dataset and the trained model for predicting the localization of the 21 keypoints.
  • VideoNet
    VideoNet is a new initiative to bring together the community of researchers that have put effort into creating benchmarks for video tasks.
  • YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video
  • KITTI Vision Benchmark Suite
  • Duke: A New Large-scale Person Re-identification Dataset derived from DukeMTMC
    Duke is a subset of the DukeMTMC for image-based re-ID, in the format of the Market-1501 dataset. The original dataset contains 85-minute high-resolution videos from 8 different cameras. Hand-drawn pedestrian bounding boxes are available.

Toolkits

  • Caffe
    Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
  • Caffe on Intel
    This fork of BVLC/Caffe is dedicated to improving performance of this deep learning framework when running on CPU, in particular Intel® Xeon processors (HSW+) and Intel® Xeon Phi processors
  • TensorFlow
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. TensorFlow also includes TensorBoard, a data visualization toolkit. (A minimal graph-and-session sketch appears at the end of this list.)
  • MXNet
    MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix the flavours of symbolic programming and imperative programming to maximize efficiency and productivity. In its core, a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
  • neon
    neon is Nervana’s Python based Deep Learning framework and achieves the fastest performance on modern deep neural networks such as AlexNet, VGG and GoogLeNet. Designed for ease-of-use and extensibility.
  • Piotr’s Computer Vision Matlab Toolbox
    This toolbox is meant to facilitate the manipulation of images and video in Matlab. Its purpose is to complement, not replace, Matlab’s Image Processing Toolbox, and in fact it requires that the Matlab Image Toolbox be installed. Emphasis has been placed on code efficiency and code reuse. Thanks to everyone who has given me feedback - you’ve helped make this toolbox more useful and easier to use.
  • NVIDIA Developer
  • nvCaffe
    A special branch of Caffe used on the TX1 which includes support for FP16.
  • dlib
    Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib’s open source licensing allows you to use it in any application, free of charge.
  • OpenCV
    OpenCV is released under a BSD license and hence it’s free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications.
  • CNNdroid
    CNNdroid is an open source library for execution of trained convolutional neural networks on Android devices.
  • tiny dnn
    tiny-dnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resource, embedded systems and IoT devices.

    An introduction to this toolkit: "Deep learning with C++ - an introduction to tiny-dnn" by Taiga Nomi.

  • CaffeMex
    A multi-GPU & memory-reduced MAT-Caffe on LINUX and WINDOWS
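
To make the dataflow-graph model described under TensorFlow above concrete, here is a minimal sketch against the graph-and-session API of that era (1.x-style); the shapes and values are arbitrary.

```python
import tensorflow as tf

# Nodes are operations; edges carry tensors. Nothing runs until a Session
# executes the graph.
x = tf.placeholder(tf.float32, shape=[None, 3])   # input fed at run time
w = tf.Variable(tf.ones([3, 1]))                  # learnable parameter
y = tf.matmul(x, w)                               # an op node in the graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))   # -> [[6.]]
```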

Learning/Tricks

  • Backpropagation Algorithm
    A website that explains how the backpropagation algorithm works.
  • Deep Learning (textbook by Ian Goodfellow, Yoshua Bengio, and Aaron Courville)
    The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular.
  • Neural Networks and Deep Learning (online book authored by Michael Nielsen)
    Neural Networks and Deep Learning is a free online book. The book will teach you about 1) Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data and 2) Deep learning, a powerful set of techniques for learning in neural networks. Neural networks and deep learning currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. This book will teach you many of the core concepts behind neural networks and deep learning.
  • Computer Vision: Algorithms and Applications
    This book is largely based on the computer vision courses that Richard Szeliski has co-taught at the University of Washington (2008, 2005, 2001) and Stanford (2003) with Steve Seitz and David Fleet.
  • Must Know Tips/Tricks in Deep Neural Networks
    Many implementation details for DCNNs are collected and summarized: extensive tricks and tips for building and training your own deep networks.
  • The zen of gradient descent
  • Deriving the Gradient for the Backward Pass of Batch Normalization
  • Reinforcement Learning: An Introduction
  • An overview of gradient descent optimization algorithms
  • Regularizing neural networks by penalizing confident predictions
  • What you need to know about data augmentation for machine learning
    Plentiful high-quality data is the key to great machine learning models. But good data doesn't grow on trees, and that scarcity can impede the development of a model. One way to get around a lack of data is to augment your dataset. Smart approaches to programmatic data augmentation can increase the size of your training set 10-fold or more. Even better, your model will often be more robust (and prevent overfitting) and can even be simpler due to a better training set. (A minimal augmentation sketch appears at the end of this list.)
  • Guide to deploying deep-learning inference networks and realtime object recognition tutorial for NVIDIA Jetson TX1
  • The Effect of Resolution on Deep Neural Network Image Classification Accuracy
    The author explored the impact of both spatial resolution and training dataset size on the classification performance of deep neural networks in this post.
  • Tricks for Tuning Deep Learning Hyperparameters
  • How to Tune CNN Hyperparameters
  • What Are the Better Recent (2014, 2015, 2016) Algorithms for Multi-Object Tracking in Video?
  • 5 algorithms to train a neural network
  • Towards Good Practices for Recognition & Detection
    Hikvision Research Institute shares its experience from the ImageNet 2016 competition.
  • What are the differences between Random Forest and Gradient Tree Boosting algorithms
  • Why Are Today's CNN Models All Tuned on Top of GoogleNet, VGGNet, or AlexNet?
  • Neural Networks and Deep Learning (神经网络与深度学习)
  • ILSVRC2016 Object Detection Task Review (Part 1): Image Object Detection (DET)
  • ILSVRC2016 Object Detection Task Review (Part 2): Video Object Detection (VID)
  • How to Train a GAN? Tips and tricks to make GANs work
  • The Astonishing Wasserstein GAN
  • Mathematics for Computer Science
  • Research Progress and Prospects of Generative Adversarial Networks (GANs)
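
Referring back to the data-augmentation item above, here is a minimal numpy sketch of a few simple label-preserving transforms (horizontal flip, padded random crop, brightness jitter); the parameters are arbitrary and only meant to illustrate the idea.

```python
import numpy as np

def augment(img, rng):
    """Return a randomly transformed copy of an HxWxC uint8 image."""
    out = img.copy()
    if rng.rand() < 0.5:                          # horizontal flip
        out = out[:, ::-1]
    padded = np.pad(out, ((4, 4), (4, 4), (0, 0)), mode="reflect")
    dy, dx = rng.randint(0, 9, size=2)            # random shift of up to 4 px
    out = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    scale = 1.0 + rng.uniform(-0.2, 0.2)          # brightness jitter
    return np.clip(out.astype(np.float32) * scale, 0, 255).astype(np.uint8)

rng = np.random.RandomState(0)
image = rng.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
variants = [augment(image, rng) for _ in range(10)]   # one image -> ten samples
```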

Skills

About Caffe

  • Set Up Caffe on Ubuntu 14.04 64-bit + NVIDIA GTX 970M + CUDA 7.0
  • Configuring the Caffe CNN Toolkit with VS2013 (64-bit Windows 7): Creating the Project
  • Configuring the Caffe CNN Toolkit with VS2013 (64-bit Windows 7): Preparing the Dependency Libraries

Setting Up

  • Installation of NVIDIA GPU Driver and CUDA Toolkit
  • Tensorflow v0.10 installed from scratch on Ubuntu 16.04, CUDA 8.0RC+Patch, cuDNN v5.1 with a 1080GTX
  • Notes on Building a Compact Deep Learning Rig | Helping You Avoid the Pitfalls