
OpenCV-Python Basic Image Operations and Processing: A Quick Summary of Functions and Methods


Contents

  • Basic operations on images
  • Image processing
  • Edge detection, image contour processing, and template matching
  • Image pyramids and histograms
  • Fourier transform and low-pass / high-pass filtering

Basic operations on images

Reading and displaying images
  • imread(): read an image from a path.
  • imshow(): display an image in a window.
  • waitKey(0/ms): how long the window waits; 0 means wait until a key is pressed, ms means close after that many milliseconds.
  • destroyAllWindows(): close all windows.
  • IMREAD_GRAYSCALE: read the image as grayscale.
  • IMREAD_COLOR: read the image as a BGR color image.
  • imwrite(): save an image to a path.

Reading video
  • VideoCapture(path/0/1): read video; path is a video file path, 0/1 selects a camera.
  • vc.isOpened(): check whether the video was opened / whether the camera is on.
  • open, frame = vc.read(): read the video frame by frame; open (bool) indicates whether the read succeeded, frame is the frame that was read.
  • cvtColor(frame, cv2.COLOR_BGR2GRAY): convert a video frame from BGR to grayscale.
  • vc.release(): release the camera, or close the video file.

Splitting and merging channels
  • b, g, r = cv2.split(img): split an image into its B, G, R channels.
  • img = cv2.merge((b, g, r)): merge three single-channel images into one BGR color image.
  • cur_img = img.copy(): copy the image; cur_img is the copy.

Border padding
  • cv2.copyMakeBorder(img, top_size, bottom_size, left_size, right_size, borderType=cv2.BORDER_REPLICATE): BORDER_REPLICATE replicates the edge pixels outward.
  • Same call with borderType=cv2.BORDER_REFLECT: reflection; the image pixels are mirrored across the border.
  • Same call with borderType=cv2.BORDER_REFLECT_101: reflection with the outermost pixel as the axis of symmetry.
  • Same call with borderType=cv2.BORDER_WRAP: wrap-around (tiling) padding.
  • Same call with borderType=cv2.BORDER_CONSTANT: constant-value padding.

Numerical operations
  • image + number: adds the number to every pixel of the image.
  • image + image: adds the two images pixel by pixel; when a sum exceeds 255 the result is the sum modulo 256.
  • cv2.add(img_cat, img_cat2): adds the two images pixel by pixel; when a sum exceeds 255 the value is clipped to 255.
  • img_new = cv2.convertScaleAbs(img): takes the absolute value of the pixels.

Resizing and blending
  • img_dog = cv2.resize(img_dog, (500, 414)): resize the image to the given width and height.
  • res = cv2.resize(img, (0, 0), fx=4, fy=4): scale the image by fx and fy in the x and y directions.
  • res = cv2.addWeighted(img_cat, 0.5, img_dog, 0.5, 0): image blending; img_cat has weight 0.5, img_dog has weight 0.5, and 0 is the bias.
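The calls above cover reading, channel handling, border padding and blending. A minimal sketch putting them together, assuming two local images cat.jpg and dog.jpg (both filenames are placeholders), might look like this:

```python
# Minimal sketch of the basic operations above; cat.jpg / dog.jpg are placeholder files.
import cv2

img_cat = cv2.imread('cat.jpg')                      # BGR color image (default flag)
gray    = cv2.imread('cat.jpg', cv2.IMREAD_GRAYSCALE)

b, g, r = cv2.split(img_cat)                         # separate the B, G, R channels
merged  = cv2.merge((b, g, r))                       # recombine them into a BGR image

# Pad 50 pixels on every side by replicating the edge pixels
padded = cv2.copyMakeBorder(img_cat, 50, 50, 50, 50, borderType=cv2.BORDER_REPLICATE)

# Blend two images of the same size: 0.5 * cat + 0.5 * dog + 0
img_dog = cv2.imread('dog.jpg')
img_dog = cv2.resize(img_dog, (img_cat.shape[1], img_cat.shape[0]))
blend   = cv2.addWeighted(img_cat, 0.5, img_dog, 0.5, 0)

cv2.imshow('blend', blend)
cv2.waitKey(0)                                       # wait here until a key is pressed
cv2.destroyAllWindows()
```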

Image processing

Thresholding
  • ret, dst = cv2.threshold(src, thresh, maxval, type): src is the input image, which must be single channel (usually grayscale); dst is the output image; thresh is the threshold; maxval is the value assigned when a pixel exceeds the threshold (or falls below it, depending on type).
  • type = cv2.THRESH_BINARY: pixels above the threshold become maxval, the rest become 0; the brightest areas go to maximum brightness and the dark areas to black.
  • type = cv2.THRESH_BINARY_INV: the inverse of cv2.THRESH_BINARY.
  • type = cv2.THRESH_TRUNC: pixels above the threshold are set to the threshold, the rest are unchanged; truncation from above.
  • type = cv2.THRESH_TOZERO: pixels above the threshold are unchanged, the rest are set to 0; truncation from below.
  • type = cv2.THRESH_TOZERO_INV: the inverse of THRESH_TOZERO.

Smoothing and filtering
  • blur = cv2.blur(img, (3, 3)): simple mean (box) convolution; (3, 3) is the kernel size.
  • box = cv2.boxFilter(img, -1, (3, 3), normalize=True): box filter; normalization is optional.
  • Gaussian = cv2.GaussianBlur(img, (5, 5), 1): Gaussian filter; the kernel weights follow a Gaussian distribution, so the center pixels carry more weight.
  • median = cv2.medianBlur(img, 5): replaces each pixel with the median of its neighborhood; usually the most effective of these filters for salt-and-pepper noise.

Morphological operations (grayscale)
  • erosion = cv2.erode(img, kernel, iterations=1): erosion; the boundary shrinks toward the central region. kernel is the structuring element and iterations is the number of erosion passes.
  • dige_dilate = cv2.dilate(dige_erosion, kernel, iterations=1): dilation; the central region expands outward. kernel is the structuring element and iterations is the number of dilation passes.
  • opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel): opening; erode first, then dilate.
  • closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel): closing; dilate first, then erode.

Gradient operations
  • gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel): morphological gradient = dilation - erosion; kernel is the structuring element.
  • tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel): top hat = original input - result of opening.
  • blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel): black hat = result of closing - original input.
  • dst = cv2.Sobel(src, ddepth, dx, dy, ksize): gradient with the Sobel operator. ddepth is the image depth; use cv2.CV_64F when negative values are involved. dx and dy select the horizontal and vertical directions; set only one of them to 1 and the other to 0 (setting both to 1 gives poor results). ksize is the Sobel kernel size, usually an odd number.
  • dst = cv2.Scharr(src, ddepth, dx, dy): gradient with the Scharr operator; the parameters have the same meaning as for Sobel, but the kernel is fixed at 3x3 and its coefficients are larger, so it reacts more strongly to gradients.
  • laplacian = cv2.Laplacian(src, ddepth): gradient with the Laplacian operator; ddepth is the image depth, and cv2.CV_64F should be chosen when negative values are involved.

Basic drawing
  • img = cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2): draw a rectangle on the image. img is the image to draw on; (x, y) is the top-left corner; (x+w, y+h) is the bottom-right corner; (0, 255, 0) is the drawing color; 2 is the line thickness.
  • img = cv2.circle(img, center, radius, (0, 255, 0), 2): draw a circle on the image. center is the center of the circle; radius is the radius; (0, 255, 0) is the drawing color; 2 is the line thickness.
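A minimal sketch, assuming a local image coins.jpg (a placeholder name), that chains thresholding, the three smoothing filters, opening/closing and a Sobel gradient:

```python
# Minimal sketch of thresholding, smoothing, morphology and gradients; coins.jpg is a placeholder file.
import cv2
import numpy as np

gray = cv2.imread('coins.jpg', cv2.IMREAD_GRAYSCALE)

# Binary threshold: pixels above 127 become 255, the rest become 0
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Smoothing: mean, Gaussian and median filters
blur     = cv2.blur(gray, (3, 3))
gaussian = cv2.GaussianBlur(gray, (5, 5), 1)
median   = cv2.medianBlur(gray, 5)

# Morphology: opening (erode then dilate) and closing (dilate then erode)
kernel  = np.ones((5, 5), np.uint8)
opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
closing = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Sobel gradient: compute x and y separately, take absolute values, then combine
sobelx = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
sobely = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))
sobel  = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
```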

Edge detection, image contour processing, and template matching

Canny edge detection
  • v1 = cv2.Canny(img, minValue, maxValue): Canny edge detection. The steps are:
    (1) Smooth the image with a Gaussian filter (with a normalized kernel) to remove noise.
    (2) Compute the gradient magnitude (with Sobel/Scharr) and direction (arctan(dy/dx)) at every pixel.
    (3) Apply non-maximum suppression to remove spurious edge responses, using either linear interpolation or the eight-direction method: compare each pixel's gradient magnitude with the two neighbors along its gradient direction, and keep it only if it is larger than both.
    (4) Apply double-threshold detection to determine real and potential edges: gradients above maxValue are kept as edges; gradients between minValue and maxValue are kept only if connected to an edge pixel, otherwise discarded; gradients below minValue are always discarded.
    (5) Finally, suppress the remaining isolated weak edges to finish the detection.
    If both minValue and maxValue are high, the detection is strict and fewer edges are produced; if both are low, the detection is loose and more edges are produced.

Contour detection
  • binary, contours, hierarchy = cv2.findContours(img, mode, method): find contours. img is the input image, mode is the contour retrieval mode, and method is the contour approximation method. Note that a BGR image must first be converted to grayscale and then thresholded (with cv2.THRESH_BINARY); in OpenCV 4.x the call returns only contours, hierarchy.
  • mode = RETR_EXTERNAL: retrieve only the outermost contours.
  • mode = RETR_LIST: retrieve all contours and store them in a flat list.
  • mode = RETR_CCOMP: retrieve all contours and organize them into two levels; the top level holds the outer boundary of each component, the second level holds the boundaries of the holes.
  • mode = RETR_TREE: retrieve all contours and reconstruct the full hierarchy of nested contours (commonly used).
  • method = CHAIN_APPROX_NONE: output the contour as a Freeman chain code; all other methods output polygons (sequences of vertices).
  • method = CHAIN_APPROX_SIMPLE: compress horizontal, vertical and diagonal segments, keeping only their end points (commonly used).
  • return contours: the contours themselves.
  • return hierarchy: the hierarchy attributes of each contour.
  • cv2.drawContours(img, contours, -1, (0, 0, 255), 2): draw contours onto the image img. contours is the variable holding the contour data; -1 means draw all contours (replace it with 1/2/3 to draw only the 1st/2nd/3rd contour); (0, 0, 255) is the BGR drawing color (pure red here); 2 is the line thickness.
  • cv2.contourArea(cnt): compute the contour area. cnt is one element of contours, which itself is a list.
  • cv2.arcLength(cnt, True): compute the contour perimeter. True means a closed contour, False an open one.
  • approx = cv2.approxPolyDP(cnt, epsilon, True): contour approximation, for when an exact contour is not needed. cnt is one element of contours. epsilon is the approximation accuracy, usually set to some fraction of the contour perimeter and tuned by hand; the third parameter indicates whether the approximated curve is closed. The returned approx is itself a contour and is drawn with drawContours as well.
  • x, y, w, h = cv2.boundingRect(cnt): compute the bounding rectangle of a contour. The return values x, y are the top-left corner of the rectangle, and w, h are its width and height.
  • (x, y), radius = cv2.minEnclosingCircle(cnt): compute the minimum enclosing circle of a contour. The return values (x, y) are the center of the circle and radius is its radius.

Template matching
  • res = cv2.matchTemplate(img, template, type): template matching. img is the source image, template is the template image, and type is the matching method.
  • type = cv2.TM_SQDIFF: squared difference; smaller values mean a better match.
  • type = cv2.TM_CCORR: correlation; larger values mean a better match.
  • type = cv2.TM_CCOEFF: correlation coefficient; larger values mean a better match.
  • type = cv2.TM_SQDIFF_NORMED: normalized squared difference; values closer to 0 mean a better match (recommended).
  • type = cv2.TM_CCORR_NORMED: normalized correlation; values closer to 1 mean a better match (recommended).
  • type = cv2.TM_CCOEFF_NORMED: normalized correlation coefficient; values closer to 1 mean a better match (recommended).
  • return res: if the source image is A × B and the template is a × b, the output matrix is (A - a + 1) × (B - b + 1).
  • min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res): find the best and worst matching positions. res is the two-dimensional match array produced by matchTemplate. The return values min_val and max_val are the minimum and maximum match scores, and min_loc and max_loc are the positions where they occur.
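A minimal sketch of the edge / contour / template-matching pipeline; shapes.jpg and template.jpg are placeholder filenames, and the findContours call is written for OpenCV 4.x, which returns two values:

```python
# Minimal sketch of Canny, contour extraction and template matching; filenames are placeholders.
import cv2

img  = cv2.imread('shapes.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny with a low and a high threshold
edges = cv2.Canny(gray, 80, 150)

# Contours require a binary image; OpenCV 4.x returns (contours, hierarchy)
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

draw = img.copy()
cv2.drawContours(draw, contours, -1, (0, 0, 255), 2)   # draw every contour in red

cnt = contours[0]                                      # assumes at least one contour was found
area       = cv2.contourArea(cnt)
perimeter  = cv2.arcLength(cnt, True)
x, y, w, h = cv2.boundingRect(cnt)
cv2.rectangle(draw, (x, y), (x + w, y + h), (0, 255, 0), 2)

# Template matching with a normalized method; max_loc is the best match position
template = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)
th, tw   = template.shape[:2]
res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
cv2.rectangle(draw, max_loc, (max_loc[0] + tw, max_loc[1] + th), (255, 0, 0), 2)
```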

Image pyramids and histograms

Image pyramids
  • Why build a pyramid? The features extracted from an image may differ after the image is enlarged or reduced, so the image is processed at several scales.
  • down = cv2.pyrDown(img): Gaussian pyramid, downsampling. Convolve the image with a Gaussian kernel, then drop the even-numbered rows and columns.
  • up = cv2.pyrUp(img): Gaussian pyramid, upsampling. Double the image size in each direction, fill the new rows and columns with 0, then convolve the enlarged image with the same Gaussian kernel (multiplied by 4).
  • down = cv2.pyrDown(img); lap = img - cv2.pyrUp(down): Laplacian pyramid. Downsample the image, upsample the result, and subtract it from the original; this gives the first level of the Laplacian pyramid.

Histograms
  • Why generate a histogram? To see how the pixel values 0-255 of an image are distributed, which helps with the later equalization step.
  • res = cv2.calcHist(images, channels, mask, histSize, ranges): compute a histogram. images: the source image, of dtype uint8 or float32, passed inside square brackets. channels: also inside square brackets; the channel whose histogram is computed, 0 for a grayscale image or 0/1/2 for the B/G/R channel of a color image. mask: the mask image; pass None to compute the histogram of the whole image, or a mask to compute it only over a region. histSize: the number of bins, also inside square brackets. ranges: the pixel value range, usually [0, 256]. The return value res holds the count for each bin.
  • plt.hist(img.ravel(), 256): plot the histogram with matplotlib; note that img.ravel() is used.
  • mask = np.zeros(img.shape[:2], np.uint8); mask[100:300, 100:400] = 255: build a mask. It has the same shape as img, each pixel is an unsigned 8-bit value (range 0-255), and the region of interest is set to 255. To use it, pass the mask variable to calcHist in place of None.
  • masked_img = cv2.bitwise_and(img, img, mask=mask): AND operation; combine the image img with the mask pixel by pixel.
  • equ = cv2.equalizeHist(img): histogram equalization; equ is the equalized image.
  • clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)): create an adaptive (CLAHE) equalizer. clipLimit is the contrast clipping parameter; tileGridSize is the size of the local tiles (the "small windows") over which equalization is applied.
  • res_clahe = clahe.apply(img): apply the adaptive equalizer to the image; res_clahe is the adaptively equalized result.
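A minimal sketch of pyramid levels, histograms and (adaptive) equalization, assuming a local grayscale image lena.jpg (a placeholder name):

```python
# Minimal sketch of pyramids, histograms and equalization; lena.jpg is a placeholder file.
import cv2
import numpy as np
import matplotlib.pyplot as plt

gray = cv2.imread('lena.jpg', cv2.IMREAD_GRAYSCALE)

# Gaussian pyramid: one level down, one level up
down = cv2.pyrDown(gray)
up   = cv2.pyrUp(down)

# First level of the Laplacian pyramid: original minus the down-then-up image
# (for odd dimensions pyrUp(pyrDown(img)) can be one pixel larger, so resize to be safe)
up  = cv2.resize(up, (gray.shape[1], gray.shape[0]))
lap = cv2.subtract(gray, up)

# Histogram of the whole grayscale image: 256 bins over [0, 256)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
plt.plot(hist)
plt.show()

# Histogram of a masked region only
mask = np.zeros(gray.shape[:2], np.uint8)
mask[100:300, 100:400] = 255
hist_masked = cv2.calcHist([gray], [0], mask, [256], [0, 256])

# Global equalization vs. CLAHE (adaptive equalization on 8x8 tiles)
equ   = cv2.equalizeHist(gray)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
res_clahe = clahe.apply(gray)
```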

Fourier transform and low-pass / high-pass filtering

Fourier transform and frequency-domain filtering
  • What is the Fourier transform for? It separates high-frequency and low-frequency information. High frequency: grayscale components that change sharply; low frequency: grayscale components that change slowly.
  • Why use it? Detecting high- and low-frequency information directly on the original image is hard; in the frequency domain it is simple.
  • Low-pass filter: keeps only the low frequencies, which blurs the image.
  • High-pass filter: keeps only the high frequencies, which enhances image detail.
  • img_float32 = np.float32(img): before the frequency-domain transform, the data must be converted to float32.
  • dft = cv2.dft(img_float32, flags=cv2.DFT_COMPLEX_OUTPUT): Fourier transform of the image. img_float32 is the input image; flags=cv2.DFT_COMPLEX_OUTPUT makes the transform output complex values, which is the usual choice. The return value dft is the spectrum; the low-frequency information is concentrated at the corners (top-left).
  • dft_shift = np.fft.fftshift(dft): shift the low-frequency information from the corner to the center of the spectrum.
  • magnitude_spectrum = 20*np.log(cv2.magnitude(dft_shift[:,:,0], dft_shift[:,:,1])): convert the spectrum into a displayable image; a fixed recipe.
  • rows, cols = img.shape; crow, ccol = int(rows/2), int(cols/2); mask = np.zeros((rows, cols, 2), np.uint8); mask[crow-30:crow+30, ccol-30:ccol+30] = 1: build a low-pass filter, which is in effect a mask over the spectrum.
  • fshift = dft_shift * mask: combining the spectrum with the mask is just an element-wise multiplication.
  • f_ishift = np.fft.ifftshift(fshift): shift the spectrum back from the center to the top-left corner.
  • img_back = cv2.idft(f_ishift): inverse Fourier transform, converting the spectrum back into image information.
  • img_back = cv2.magnitude(img_back[:,:,0], img_back[:,:,1]): finally combine the two channels into a viewable image.
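A minimal sketch of the low-pass pipeline above, assuming a local grayscale image lena.jpg (a placeholder name); swapping the 0s and 1s in the mask turns it into the high-pass version:

```python
# Minimal sketch of frequency-domain low-pass filtering; lena.jpg is a placeholder file.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('lena.jpg', cv2.IMREAD_GRAYSCALE)

# Forward DFT (complex output), then shift the low frequencies to the center
img_float32 = np.float32(img)
dft = cv2.dft(img_float32, flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft)

# Low-pass mask: keep a 60x60 window around the center, zero out everything else
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
mask = np.zeros((rows, cols, 2), np.uint8)
mask[crow - 30:crow + 30, ccol - 30:ccol + 30] = 1

# Apply the mask, shift back, inverse DFT, and rebuild a viewable image
fshift = dft_shift * mask
f_ishift = np.fft.ifftshift(fshift)
img_back = cv2.idft(f_ishift)
img_back = cv2.magnitude(img_back[:, :, 0], img_back[:, :, 1])

plt.subplot(121), plt.imshow(img, cmap='gray'), plt.title('input')
plt.subplot(122), plt.imshow(img_back, cmap='gray'), plt.title('low-pass result')
plt.show()
```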