face01lib package

Subpackages

Submodules

face01lib.Calc module

A module that performs various calculations.

Calculation results are output to the log.

class face01lib.Calc.Cal(log_level: str = 'info')[source]

Bases: object

The Cal class includes various calculation methods.

HANDLING_FRAME_TIME: float
HANDLING_FRAME_TIME_FRONT: float
HANDLING_FRAME_TIME_REAR: float
Measure_func(func)[source]

Used as a decorator to time a function.

static Measure_processing_time(HANDLING_FRAME_TIME_FRONT, HANDLING_FRAME_TIME_REAR) → float[source]

Measures processing time and outputs the result to the log.

Parameters:
  • HANDLING_FRAME_TIME_FRONT (float) -- First half point

  • HANDLING_FRAME_TIME_REAR (float) -- Second half point

static Measure_processing_time_backward() → float[source]

Measurement of processing time (second half).

Returns:

Second half point

Return type:

float

static Measure_processing_time_forward() → float[source]

Measurement of processing time (first half).

Returns:

First half point

Return type:

float
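Example

A hedged usage sketch based on the three static methods above; the exact call sites inside FACE01 may differ, and some_heavy_processing() is a hypothetical placeholder.

>>> HANDLING_FRAME_TIME_FRONT = Cal.Measure_processing_time_forward()
>>> some_heavy_processing()  # hypothetical workload to be timed
>>> HANDLING_FRAME_TIME_REAR = Cal.Measure_processing_time_backward()
>>> Cal.Measure_processing_time(HANDLING_FRAME_TIME_FRONT, HANDLING_FRAME_TIME_REAR)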

cal_resized_logo_image(resized_logo_image: ndarray[Any, dtype[float64]], set_height: int, set_width: int) → Tuple[int, int, int, int, ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]][source]

Calculate logo image data.

Parameters:
  • resized_logo_image (npt.NDArray[np.float64]) -- Resized logo image data

  • set_height (int) -- Height

  • set_width (int) -- Width

Returns:

Return tuple

Return type:

Tuple[int, int, int, int, npt.NDArray[np.float64], npt.NDArray[np.float64]]

Example

>>> cal_resized_logo_nums = Cal().cal_resized_logo_image(
        resized_logo_image,
        set_height,
        set_width
    )
cal_resized_telop_image(resized_telop_image: ndarray[Any, dtype[float64]]) → Tuple[int, int, int, int, ndarray[Any, dtype[float64]], ndarray[Any, dtype[float64]]][source]

Calculate telop image data.

Parameters:

resized_telop_image (npt.NDArray[np.float64]) -- Resized telop image data

Returns:

Tuple

Return type:

Tuple[int, int, int, int, npt.NDArray[np.float64], npt.NDArray[np.float64]]

Example

>>> cal_resized_telop_nums = Cal().cal_resized_telop_image(resized_telop_image)
cal_specify_date(logger) → None[source]

Specifies the expiration date.

Summary:

Restricts use of the program by date. Calling this method inside a module makes that module unusable after the specified date. This is useful when operating an application with a time limit.

One point

If you use this method, it is a good idea to binarize the module with 'Cython'.

decide_text_position(error_messg_rectangle_bottom, error_messg_rectangle_left, error_messg_rectangle_right, error_messg_rectangle_fontsize, error_messg_rectangle_messg)[source]

Not used.

make_draw_rgb_object(pil_img_obj_rgb)[source]

Generate object.

Parameters:

pil_img_obj_rgb (object) -- PIL image object (RGB)

Returns:

object

Return type:

object

make_error_messg_rectangle_font(fontpath: str, error_messg_rectangle_fontsize: str, encoding='utf-8')[source]

Not used.

pil_img_instance(frame: ndarray[Any, dtype[uint8]])[source]

Generate pil_img object.

Parameters:

frame (npt.NDArray[np.uint8]) -- Image data

Returns:

PIL object

Return type:

object

return_percentage(distance: float, deep_learning_model: int) → float[source]

Receive 'distance' and return percentage.

Parameters:
  • distance (float) -- distance

  • deep_learning_model (int) -- deep_learning_model

Returns:

percentage

Return type:

float

Note

deep_learning_model:

0:dlib_face_recognition_resnet_model_v1.dat, 1:JAPANESE_FACE_V1.onnx, 2:mobilefacenet.onnx (implementation TBD)

Example

>>> percentage = Cal().return_percentage(distance, deep_learning_model)
to_percentage(tolerance: float, deep_learning_model: int) → float[source]

Receive 'tolerance' and return 'percentage'.

Parameters:
  • tolerance (float) -- tolerance

  • deep_learning_model (int) -- Selects the deep learning model (see the note under to_tolerance)

Returns:

percentage

Return type:

float

to_tolerance(similar_percentage: float, deep_learning_model: int) → float[source]

Receives similar_percentage and returns tolerance.

Parameters:
  • similar_percentage (float) -- The 'Probability of similarity' value set in config.ini

  • deep_learning_model (int) -- Selects the deep learning model.

Returns:

tolerance

Return type:

float

Note

deep_learning_model:

0:dlib_face_recognition_resnet_model_v1.dat, 1:JAPANESE_FACE_V1.onnx, 2:mobilefacenet.onnx

Example

>>> tolerance: float = Cal().to_tolerance(
        self.CONFIG["similar_percentage"],
        self.CONFIG["deep_learning_model"]
    )
Dlib

Calculation formula:

percentage = -4.76190475*(p*p) + (-0.380952375)*p + 100
Example: percentage_example = -4.76190475*(0.45*0.45) + (-0.380952375)*0.45 + 100
Solve for p: -4.76190475*(p*p) + (-0.380952375)*p + (100 - similar_percentage) = 0

Formula changed on September 14, 2023:

f(x) = 100 / (1 + exp(-10(x - 0.8)))
similar_percentage = 100 / (1 + np.exp(-10*(tolerance - 0.8)))

JAPANESE_FACE_V1

Calculation formula:

y = -23.71x² + 49.98x + 73.69
See: https://zenn.dev/ykesamaru/articles/bc74ec27925896#%E9%96%BE%E5%80%A4%E3%81%A8%E7%99%BE%E5%88%86%E7%8E%87

percentage = -23.71*(p*p) + 49.98*p + 73.69
0 = -23.71*(p*p) + 49.98*p + (73.69 - similar_percentage)
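The following is an illustrative Python sketch of the conversions implied by the formulas above; it is not the library's internal code, and the function names are invented for this example.

import numpy as np

def dlib_to_tolerance(similar_percentage: float) -> float:
    # Invert: similar_percentage = 100 / (1 + np.exp(-10*(tolerance - 0.8)))
    return 0.8 - np.log(100.0 / similar_percentage - 1.0) / 10.0

def japanese_face_v1_to_tolerance(similar_percentage: float) -> float:
    # Solve: 0 = -23.71*(p*p) + 49.98*p + (73.69 - similar_percentage)
    a, b, c = -23.71, 49.98, 73.69 - similar_percentage
    return (-b + np.sqrt(b**2 - 4*a*c)) / (2*a)  # the root that falls in [0, 1]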

x1: int
x2: int
y1: int
y2: int

face01lib.Core module

Reference.

class face01lib.Core.Core(log_level: str = 'info')[source]

Bases: object

Core class.

This class contains many useful methods.

common_process(CONFIG: Dict) → Generator[source]

Generator of frame_datas_array.

The common_process function consists of three Core() methods:

  1. Core().frame_pre_processing

  2. Core().face_encoding_process

  3. Core().frame_post_processing

Yields:

Generator --

frame_datas_array
  • frame_datas_array: List[Dict]

Example

Make generator object

>>> obj = Core().common_process(CONFIG)

Call '__next__()' method

>>> while True:
    frame_datas_array = obj.__next__()
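Equivalently, a minimal sketch of what common_process does internally, built from the three methods documented below (logger, CONFIG, and resized_frame are assumed to be prepared as in the frame_pre_processing example):

>>> core = Core()
>>> frame_datas_array = core.frame_pre_processing(logger, CONFIG, resized_frame)
>>> face_encodings, frame_datas_array = core.face_encoding_process(logger, CONFIG, frame_datas_array)
>>> frame_datas_array = core.frame_post_processing(logger, CONFIG, face_encodings, frame_datas_array)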
face_encoding_process(logger, CONFIG: Dict, frame_datas_array: List[Dict]) → Tuple[List[ndarray[Any, dtype[float64]]], List[Dict]][source]

Encode face data and return the following.

  • list of encoded data

  • frame_datas_array

Parameters:
  • logger (_type_) -- logger

  • CONFIG (Dict) -- Dict including initial settings

  • frame_datas_array (List[Dict]) -- frame data

Returns:

  • list of encoded data
    • List[npt.NDArray[np.float64]]

  • frame_datas_array
    • List[Dict]

Return type:

Tuple[List[npt.NDArray[np.float64]], List[Dict]]

Definition of person_data_list and frame_datas_array:
>>> person_data = {
        'name': name,
        'pict':filename,
        'date':date,
        'location':(top,right,bottom,left),
        'percentage_and_symbol': percentage_and_symbol
    }
>>> person_data_list.append(person_data)
>>> frame_datas = {
        'img': resized_frame,
        'face_location_list': face_location_list,
        'overlay': overlay,
        'person_data_list': person_data_list
    }
>>> frame_datas_array.append(frame_datas)
frame_post_processing(logger, CONFIG: Dict, face_encodings: List[ndarray[Any, dtype[float64]]], frame_datas_array: List[Dict]) → List[Dict][source]

Modify each frame's data.

Parameters:
  • logger (_type_) -- logger

  • CONFIG (Dict) -- Dict of initial settings

  • face_encodings (List[npt.NDArray[np.float64]]) -- List of encoded face data

  • frame_datas_array (List[Dict]) -- List of frame data

Returns:

List of modified data

Return type:

List[Dict]

Modify:
  • Composite images
    • Default face image

    • Rectangle

    • Percentage

    • Name

  • Percentage calculation

  • Save cropped face images

  • Make person_data_list

  • Make frame_datas_list

frame_pre_processing(logger, CONFIG: Dict, resized_frame: ndarray[Any, dtype[uint8]]) → List[Dict][source]

Return frame_datas_array.

Parameters:
  • logger (_type_) -- logger

  • CONFIG (Dict) -- Dict of Initial settings

  • resized_frame (npt.NDArray[np.uint8]) -- Resized frame

Returns:

frame_datas_array

Return type:

List[Dict]

Processing:
  • If CONFIG["headless"] is False
    • copy frame data for overlay

    • composite telop and logo

  • Compute face coordinates

  • Make frame_datas
    • img

    • face_location_list

    • overlay
      • If CONFIG["headless"] is True, insert dummy data
        • np.empty((2,2,3), dtype=np.uint8)

    • person_data_list

  • Make frame_datas_array
    • frame_datas_array.append(frame_datas)

Data structure of frame_datas_array:
  • overlay (npt.NDArray[np.uint8])

  • face_location_list (List[Tuple[int,int,int,int]])

  • name (str)

  • filename (str)

  • percentage_and_symbol (str)

  • person_data_list (List)

  • frame_datas_array (List[Dict])

  • resized_frame (npt.NDArray[np.uint8])

See Core.make_frame_datas_array()

Example

>>> exec_times = 2
# Initialize
CONFIG: Dict = Initialize('FACE-COORDINATE', 'info').initialize()
# Make generator
frame_generator_obj = VidCap().frame_generator(CONFIG)
# Make logger
import os.path
dir: str = os.path.dirname(__file__)
log = Logger().logger(__file__, dir)
# Create Core instance
core = Core()
# Repeat 'exec_times' times
for i in range(0, exec_times):
    # Call __next__() from the generator object
    resized_frame = frame_generator_obj.__next__()
    frame_datas_array = core.frame_pre_processing(log, CONFIG, resized_frame)
make_frame_datas_array(overlay: ndarray[Any, dtype[uint8]], face_location_list: List[Tuple[int, int, int, int]], name: str, filename: str, percentage_and_symbol: str, person_data_list: List, frame_datas_array: List[Dict], resized_frame: ndarray[Any, dtype[uint8]]) → List[Dict][source]

Method to make frame_datas_array.

Return the data structure of frame_datas_list.

Parameters:
  • overlay (npt.NDArray[np.uint8]) -- Copy of frame

  • face_location_list (List[Tuple[int,int,int,int]]) -- List of face_location

  • name (str) -- Each person's name

  • filename (str) -- File name

  • percentage_and_symbol (str) -- Concatenation of the percentage digits and the '%' symbol

  • person_data_list (List) -- List of person_data

  • frame_datas_array (List[Dict]) -- Array of frame_datas

  • resized_frame (npt.NDArray[np.uint8]) -- Numpy array of frame

Returns:

List of frame_datas_array

Return type:

List[Dict]

Example

person_data

>>> {
    'name': name,
    'pict': filename,
    'date': date,
    'location': (top,right,bottom,left),
    'percentage_and_symbol': percentage_and_symbol
}

person_data_list

>>> person_data_list.append(person_data)

frame_datas

>>> {
    'img': resized_frame,
    'face_location_list': face_location_list,
    'overlay': overlay,
    'person_data_list': person_data_list
}

frame_datas_list

>>> frame_datas_array.append(frame_datas)
mp_face_detection_func(resized_frame: ndarray[Any, dtype[uint8]], model_selection: int = 0, min_detection_confidence: float = 0.4)[source]

Processes an RGB image and returns the face location data computed with mediapipe.

Parameters:
  • resized_frame (npt.NDArray[np.uint8]) -- Resized image frame

  • model_selection (int, optional) -- Value set in config.ini. Defaults to 0.

  • min_detection_confidence (float, optional) -- Value set in config.ini. Defaults to 0.4.

Returns:

A NamedTuple object with a "detections" field that contains a list of the detected face location data.

Refer:

https://solutions.mediapipe.dev/face_detection#python-solution-api

Note

number_of_people and same_time_recognize in config.ini are disabled when using mp_face_detection_func.

override_args_dict(CONFIG: Dict, override_list: List[Tuple]) → Dict[source]

Override values in CONFIG.

Parameters:
  • CONFIG (Dict) -- CONFIG

  • override_list (List[Tuple]) -- List of (key, value) tuples to override

Returns:

CONFIG

Return type:

Dict

Example

>>> CONFIG = Core.override_args_dict(
    CONFIG,
    [
        ('crop_face_image', False),
        ('output_debug_log', True)
    ]
)

Note

  • THIS METHOD IS EXPERIMENTAL
    • Unexpected side effects may occur.

  • If the specified key does not exist, the application will fail and print a 'warning' log.

  • You cannot change the 'headless' key. If you override it, the application will fail.

r_face_image(frame: ndarray[Any, dtype[uint8]], face_location: Tuple[int, int, int, int]) → ndarray[Any, dtype[uint8]][source]

Return the face image expressed as an ndarray.

Parameters:
  • frame (npt.NDArray[np.uint8]) -- Frame image

  • face_location (Tuple[int,int,int,int]) -- Face location (coordinate)

Returns:

Face image expressed as an ndarray

Return type:

npt.NDArray[np.uint8]

return_anti_spoof(frame: ndarray[Any, dtype[uint8]], face_location: Tuple[int, int, int, int]) → Tuple[str, float, bool][source]

Return the result of anti-spoof detection.

Note

This function is EXPERIMENTAL! It may cause side effects.

Parameters:
  • frame (npt.NDArray[np.uint8]) -- Frame

  • face_location (Tuple[int,int,int,int]) -- Face location

Returns:

  • spoof_or_real

  • score

  • ELE
    • Equally Likely Events

Return type:

Tuple[str, float, bool]
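Example

A short usage sketch based on the signature above; frame and face_location are assumed to be prepared by the caller.

>>> spoof_or_real, score, ELE = Core().return_anti_spoof(frame, face_location)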

return_concatenate_location_and_frame(resized_frame: ndarray[Any, dtype[uint8]], face_location_list: List[Tuple[int, int, int, int]]) → Tuple[List[Tuple[int, int, int, int]], ndarray[Any, dtype[uint8]]][source]

Return tuple.

  • concatenate_face_location_list

  • concatenate_person_frame

Parameters:
  • resized_frame (npt.NDArray[np.uint8]) -- Resized frame

  • face_location_list (List[Tuple[int,int,int,int]]) -- Face location list

Returns:

concatenate_face_location_list (List[Tuple[int,int,int,int]])

List of concatenated coordinates

concatenate_person_frame (npt.NDArray[np.uint8])

Image data of the concatenated person images

face01lib.Initialize module

Loads config.ini and returns the CONFIG dictionary.

This function reads the 'config.ini' file using the ConfigParser module and returns its contents as a dictionary.

Returns:

The CONFIG dictionary containing the configuration data.

Return type:

dict

Note

For the config.ini file, see: https://github.com/yKesamaru/FACE01_DEV/blob/master/docs/config_ini.md#about-configini-file

See config_ini.md for details.

'config.ini' is the configuration file for FACE01 and is processed with Python's ConfigParser module. The [DEFAULT] section specifies standard default values, and the shipped settings are an example.

Before modifying 'config.ini', you should be familiar with the ConfigParser module; see the ConfigParser module documentation.

Each section inherits from the [DEFAULT] section, so specify in each section only the items (keys and values) that override [DEFAULT].

config_ini.md
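A minimal sketch of this inheritance behavior using the standard-library ConfigParser (not FACE01 code; the section and key names here are illustrative):

import configparser

conf = configparser.ConfigParser()
conf.read_string("""
[DEFAULT]
headless = True
set_width = 750

[MY_SECTION]
set_width = 500
""")

print(conf["MY_SECTION"]["set_width"])  # '500' (overridden in the section)
print(conf["MY_SECTION"]["headless"])   # 'True' (inherited from [DEFAULT])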
class face01lib.Initialize.Initialize(section: str = 'DEFAULT', log_level: str = 'info')[source]

Bases: object

Initialize class.

Load config.ini and return it as a Dict.

initialize() → Dict[source]

Initialize values.

Returns:

CONFIG: Dictionary of initialized preferences

Return type:

Dict

Example

>>> CONFIG: Dict = Initialize("SECTION").initialize()

face01lib.LoadImage module

Load image class.

class face01lib.LoadImage.LoadImage(headless: bool, conf_dict: Dict)[source]

Bases: object

This class includes methods to load images.

LI(set_height: int, set_width: int) → Tuple[Mat, ...][source]

Return values.

Summary:

Load images and return them all together in a tuple.

Parameters:
  • self -- self

  • set_height (int) -- Height described in config.ini

  • set_width (int) -- Width described in config.ini

Returns:

Tuple.

  • rect01_png (cv2.Mat): Loaded image data as ndarray

  • rect01_NG_png (cv2.Mat): Loaded image data as ndarray

  • rect01_REAL_png (cv2.Mat): Loaded image data as ndarray

  • rect01_SPOOF_png (cv2.Mat): Loaded image data as ndarray

  • rect01_CANNOT_DISTINCTION_png (cv2.Mat): Loaded image data as ndarray

  • resized_telop_image (Union[cv2.Mat, None]): Loaded image data as ndarray

  • cal_resized_telop_nums (Union[Tuple[int,int,int,int,npt.NDArray[np.float64],npt.NDArray[np.float64]], None]): Tuple or None

  • resized_logo_image (Union[cv2.Mat, None]): Loaded image data as ndarray or None

  • cal_resized_logo_nums (Union[Tuple[int,int,int,int,npt.NDArray[np.float64],npt.NDArray[np.float64]], None]): Tuple or None

  • load_unregistered_face_image (bool): Bool

  • telop_image (Union[cv2.Mat, None]): Loaded image data as ndarray or None

  • logo_image (Union[cv2.Mat, None]): Loaded image data as ndarray or None

  • unregistered_face_image (Union[cv2.Mat, None]): Loaded image data as ndarray or None
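Example

A hedged usage sketch based on the signature above; CONFIG is assumed to be the dictionary returned by Initialize().initialize(), and the unpacking order follows the return tuple documented above.

>>> LI = LoadImage(CONFIG["headless"], CONFIG)
>>> (rect01_png, rect01_NG_png, rect01_REAL_png, rect01_SPOOF_png,
        rect01_CANNOT_DISTINCTION_png, resized_telop_image, cal_resized_telop_nums,
        resized_logo_image, cal_resized_logo_nums, load_unregistered_face_image,
        telop_image, logo_image, unregistered_face_image) = LI.LI(set_height, set_width)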

face01lib.api module

Summary.

COPYRIGHT:

This code is based on 'face_recognition' written by Adam Geitgey (ageitgey), and modified by Yoshitsugu Kesamaru (yKesamaru).

ORIGINAL AUTHOR:
  • Dlib
    • davisking

  • face_recognition
    • ageitgey

  • FACE01, and api.py
    • yKesamaru

See also

Note

About coordinate order...

  • dlib: (Left, Top, Right, Bottom), called 'rect'.

  • face_recognition: (top, right, bottom, left), called 'css'.

See below: https://github.com/davisking/dlib/blob/master/python_examples/face_recognition.py

DEBUG: MEMORY LEAK
from .memory_leak import Memory_leak
m = Memory_leak(limit=2, key_type='traceback', nframe=20)
m.memory_leak_analyze_start()
See below:

[Ja] https://zenn.dev/ykesamaru/articles/bd403aa6d03100

class face01lib.api.Dlib_api(log_level: str = 'error')[source]

Bases: object

Dlib api.

Author: Original code written by Adam Geitgey, modified by YOSHITSUGU KESAMARU

Email: y.kesamaru@tokai-kaoninsho.com

JAPANESE_FACE_V1_model_compute_face_descriptor(resized_frame: ndarray[Any, dtype[uint8]], raw_face_landmark, size: int = 224, _PADDING: float = 0.1) → ndarray[Any, dtype[float32]][source]

Computes face features using the EfficientNetV2 and ArcFace models.

This function computes face features (embeddings) from the given face image data using the EfficientNetV2 and ArcFace models.

Parameters:
  • resized_frame (npt.NDArray[np.uint8]) -- Image data of the resized frame.

  • raw_face_landmark (dlib.rectangle) -- Face landmark information.

  • size (int, optional) -- Size of the face chip. Defaults to 224.

  • _PADDING (float, optional) -- Padding used when extracting the face chip. Defaults to 0.1.

Returns:

Face features (embedding).

Return type:

npt.NDArray[np.float32]

compare_faces(deep_learning_model: int, known_face_encodings: List[ndarray[Any, dtype[float64]]], face_encoding_to_check: ndarray[Any, dtype[float64]], tolerance: float = 0.6, threshold: float = 0.4) → Tuple[ndarray, float][source]

Compares a list of face encodings against a candidate encoding.

Parameters:
  • deep_learning_model (int) -- 0: dlib cnn model, 1: JAPANESE_FACE_V1.onnx

  • known_face_encodings (List[npt.NDArray[np.float64]]) -- List of known face encodings

  • face_encoding_to_check (npt.NDArray[np.float64]) -- A single face encoding to compare against the list

  • tolerance (float) -- How much distance between faces is considered a match. For dlib, lower is stricter; 0.6 is a typical value.

  • threshold (float) -- Threshold

Returns:

A tuple of True/False values indicating which known_face_encodings match the face encoding to check, and the minimum distance between them.

cosine_similarity(embedding1, embedding2, threshold=0.4)[source]

Receives feature vectors, computes their cosine similarity, and returns whether it exceeds the threshold.

Parameters:
  • embedding1 (npt.NDArray) -- feature vector

  • embedding2 (npt.NDArray) -- feature vector

  • threshold (float, optional) -- threshold. Defaults to 0.4.

Returns:

A tuple of a numpy array of booleans and the minimum cos_sim

Return type:

Tuple[np.array, float]
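A minimal numpy sketch of the cosine-similarity computation described above (illustrative; the library's internal implementation may differ):

import numpy as np

def cos_sim(embedding1: np.ndarray, embedding2: np.ndarray) -> float:
    # Cosine similarity: dot product divided by the product of the norms
    return float(np.dot(embedding1, embedding2) /
                 (np.linalg.norm(embedding1) * np.linalg.norm(embedding2)))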

face_distance(face_encodings: List[ndarray[Any, dtype[float64]]], face_to_compare: ndarray[Any, dtype[float64]]) → ndarray[Any, dtype[float64]][source]

Compares a given list of face encodings with a known face encoding and returns the Euclidean distance for each comparison face.

The distance tells you how similar the faces are.

Parameters:
  • face_encodings (List[npt.NDArray[np.float64]]) -- List of face encodings to compare (=small_frame)

  • face_to_compare (npt.NDArray[np.float64]) -- A face encoding to compare against (=face_location_list)

Returns:

A numpy ndarray of distances between faces, in the same order as the faces (names) array

Return type:

npt.NDArray[np.float64]
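A minimal numpy sketch of the Euclidean-distance computation described above (the same idea as face_recognition's face_distance; illustrative only):

import numpy as np

def euclidean_distances(face_encodings, face_to_compare):
    # Distance between each known encoding and the encoding to compare
    if len(face_encodings) == 0:
        return np.empty((0,))
    return np.linalg.norm(np.asarray(face_encodings) - face_to_compare, axis=1)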

face_encodings(deep_learning_model: int, resized_frame: ndarray[Any, dtype[uint8]], face_location_list: List = [], num_jitters: int = 0, model: str = 'small') → List[ndarray][source]

Given an image, return the 128-dimension face encoding for each face in the image.

Parameters:
  • deep_learning_model (int) -- 0: dlib model, 1: JAPANESE_FACE_V1.onnx

  • resized_frame (npt.NDArray[np.uint8]) -- The image that contains one or more faces (=small_frame)

  • face_location_list (List) -- Optional - the bounding boxes of each face if you already know them. (=face_location_list)

  • num_jitters (int) -- How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)

  • model (str) -- Do not modify.

Returns:

A list of 128-dimensional face encodings (one for each face in the image).

If deep_learning_model == 1, the returned list contains 512-dimensional face encodings, with the type List[npt.NDArray[np.float32]].

Image size: The image should be of size 150x150. Also, cropping must be done as dlib.get_face_chip would do it. That is, centered and scaled essentially the same way.

Return type:

List[npt.NDArray[np.float64]]

See also

class dlib.face_recognition_model_v1: compute_face_descriptor(*args, **kwargs): http://dlib.net/python/index.html#dlib_pybind11.face_recognition_model_v1

compute_face_descriptor(*args, **kwargs): http://dlib.net/python/index.html#dlib_pybind11.face_recognition_model_v1.compute_face_descriptor

face_locations(resized_frame: ndarray[Any, dtype[uint8]], number_of_times_to_upsample: int = 0, mode: str = 'hog') → List[Tuple[int, int, int, int]][source]

Returns an array of bounding boxes of human faces in an image.

This method is used only when 'use_pipe = False'.

Parameters:
  • resized_frame (npt.NDArray[np.uint8]) -- Resized image

  • number_of_times_to_upsample (int) -- How many times to upsample the image looking for faces. Higher numbers find smaller faces.

  • mode (str) -- Which face detection mode to use. "hog" is less accurate but faster on CPUs. "cnn" is a more accurate deep-learning mode which is GPU/CUDA accelerated (if available). The default is "hog".

Returns:

A list of tuples of found face locations in css (top, right, bottom, left) order

load_image_file(file: str, mode: str = 'RGB') → ndarray[Any, dtype[uint8]][source]

Loads an image file (.jpg, .png, etc) into a numpy array.

Parameters:
  • file (str) -- image file name or file object to load

  • mode (str) -- format to convert the image to. Only 'RGB' (8-bit RGB, 3 channels) and 'L' (black and white) are supported.

Returns:

Image contents as a numpy array

Return type:

npt.NDArray[np.uint8]

percentage(cos_sim)[source]

Computes the similarity percentage from the given cos_sim.

Parameters:

cos_sim (float) -- cosine similarity

Returns:

percentage of similarity

Return type:

float

face01lib.combine module

class face01lib.combine.Comb(log_level: str = 'info')[source]

Bases: object

static comb(a: List[T], b: List[T]) → List[T][source]

face01lib.load_preset_image module

Creates an `npKnown.npz` file from face image files.

Summary:

Scans the `preset_face_images` directory and creates an `npKnown.npz` file.

class face01lib.load_preset_image.LoadPresetImage(log_level: str = 'info')[source]

Bases: object

load_preset_image(deep_learning_model: int, RootDir: str, preset_face_imagesDir: str, upsampling: int = 0, jitters: int = 100, mode: str = 'hog', model: str = 'small') → Tuple[List[ndarray], List[str]][source]

Creates npKnown.npz.

Parameters:
  • deep_learning_model (int) -- 0 or 1. 0: dlib, 1: JAPANESE FACE V1

  • RootDir (str) -- Directory in which npKnown.npz is created

  • preset_face_imagesDir (str) -- Directory containing the face images

  • upsampling (int, optional) -- Upsampling value. Defaults to 0.

  • jitters (int, optional) -- Jitter value. Defaults to 100.

  • mode (str, optional) -- HOG or CNN. Defaults to 'hog'.

  • model (str, optional) -- Defaults to 'small'.

Returns:

known_face_encodings_list, known_face_names_list (npKnown.npz is created as a side effect)

  • known_face_encodings_list
    • List of encoded face images as ndarray

  • known_face_names_list
    • List of the names corresponding to the encoded face images

Return type:

Tuple[List[np.ndarray], List[str]]

Example

>>> known_face_encodings, known_face_names = LoadPresetImage().load_preset_image(
        self.conf_dict["deep_learning_model"],
        self.conf_dict["RootDir"],
        self.conf_dict["preset_face_imagesDir"]
    )

face01lib.load_preset_image_bk module

Backup of load_preset_image.py.

This file is not used.

class face01lib.load_preset_image_bk.LoadPresetImage(log_level: str = 'info')[source]

Bases: object

load_preset_image(deep_learning_model: int, RootDir: str, preset_face_imagesDir: str, upsampling: int = 0, jitters: int = 100, mode: str = 'hog', model: str = 'small') → Tuple[List, List][source]

Load face images from the preset_face_images folder.

Parameters:
  • deep_learning_model (int) -- You can select 0 or 1. 0 is Dlib, 1 is Efficientnetv2_arcface.onnx.

  • RootDir (str) -- Root directory. The directory in which npKnown.npz is created.

  • preset_face_imagesDir (str) -- Path to the preset_face_images folder. The directory containing the png files.

  • upsampling (int, optional) -- Value of upsampling. Defaults to 0.

  • jitters (int, optional) -- Value of jitters. Defaults to 100.

  • mode (str, optional) -- You can select hog or cnn. Defaults to 'hog'.

  • model (str, optional) -- You cannot modify this value.

Returns:

known_face_encodings_list, known_face_names_list

  • known_face_encodings_list
    • List of encoded face images as ndarray

  • known_face_names_list
    • List of the names corresponding to the encoded face images

Return type:

Tuple[List, List]

Example

>>> known_face_encodings, known_face_names = LoadPresetImage().load_preset_image(
    self.conf_dict["deep_learning_model"],
    self.conf_dict["RootDir"],
    self.conf_dict["preset_face_imagesDir"]
)

face01lib.logger module

Manage log.

class face01lib.logger.Logger(log_level: str = 'info')[source]

Bases: object

Set log level.

logger(name: str, dir: str)[source]

Manage log.

Parameters:
  • name (str) -- File name

  • dir (str) -- Directory name. (Usually the root directory of FACE01)

Returns:

logger

Return type:

Logger object

Note

parent_dir refers to the root directory of FACE01, i.e. CONFIG['RootDir'] in the code below.
# Initialize
CONFIG: Dict = Initialize('LIGHTWEIGHT_GUI', 'info').initialize()
# Set up logger
logger = Logger(CONFIG['log_level']).logger(__file__, CONFIG['RootDir'])

face01lib.return_face_image module

Return face image data as ndarray.

class face01lib.return_face_image.Return_face_image[source]

Bases: object

This class includes a method that returns face image data.

return_face_image(resized_frame: ndarray[Any, dtype[uint8]], face_location: Tuple[int, int, int, int]) → ndarray[Any, dtype[uint8]][source]

Return the face image as an ndarray.

Parameters:
  • resized_frame (numpy.ndarray) -- frame data

  • face_location (tuple) -- face location, ordered (top, right, bottom, left)

Returns:

Face image as an ndarray, or an empty array

Return type:

npt.NDArray[np.uint8]
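A minimal sketch of the cropping implied by the (top, right, bottom, left) order (illustrative; not necessarily the library's exact code):

>>> top, right, bottom, left = face_location
>>> face_image = resized_frame[top:bottom, left:right]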

face01lib.spoof module

Spoof Module.

Note

This module is experimental.

This module provides functionality related to facial landmark detection, object detection, and QR code generation.

Main classes:
  • Spoof: A class with methods for facial landmark detection, object detection, and QR code generation.

Main methods:
  • iris: Detects the iris of the face and draws its landmarks.

  • obj_detect: Detects objects and draws their feature points.

  • make_qr_code: Generates a QR code.

Usage example:

spoof_obj = Spoof()
spoof_obj.iris()
spoof_obj.obj_detect()
spoof_obj.make_qr_code()

Source code:

spoof.py

class face01lib.spoof.Spoof(log_level: str = 'info')[source]

Bases: object

class EyeState(value)[source]

Bases: Enum

An enumeration.

BLINKING = 3
CLOSED = 2
OPEN = 1

Returns True when an eye blink is detected.

Parameters:

frame_datas_array (array[Dict]) -- frame_datas_array

Returns:

If an eye blink is detected, return True. Otherwise, return False.

Return type:

bool

One point

The relevant paper and formulas are summarized at the link below:

Explanation of the Python code that detects eye blinks

make_qr_code()[source]

Method to create a QR code.

Summary:

Creates "only" the QR code.

Note

Make sure that only one person appears in the input video. If no default face image is registered in preset_face_images, this will not work properly.

One point

See also 'make_ID_card.py' in the example directory.

obj_detect()[source]

Method for testing mediapipe. Performs object detection (in this case, 'shoes').

face01lib.utils module

The utils class.

When creating a deep learning model, we usually perform data augmentation to increase the base data. In general, the lens aberrations that occur are corrected mainly by calibration. However, as far as I have seen, heard, and experienced, it is common for no calibration to be done (except in high-end face recognition systems). As long as we use a normal model, this leads to a large accuracy loss.

Face distortion. Images taken from https://imgur.com/VdKIQqF and https://tokai-kaoninsho.com

By using the utils.distort_barrel() method, we believe we can greatly improve robustness against distortion caused by the camera lens.

Image taken from https://tokai-kaoninsho.com

Note

ImageMagick must be installed on your system. - See ImageMagick https://imagemagick.org/script/download.php

class face01lib.utils.Utils(log_level: str = 'info')[source]

Bases: object

Utils class.

Contains convenience methods.

align_and_resize_maintain_aspect_ratio(path: str, upper_limit_length: int = 1024, padding: float = 0.4, size: int = 224, contain: str = '') → List[str][source]

Aligns and resizes the input image (maintaining aspect ratio).

Parameters:
  • path (str) -- File path including the file name ('.jpg', '.jpeg', or '.png'; these must be lowercase). If the path is a directory, all files in that directory are processed.

  • upper_limit_length (int, optional) -- Upper limit of the width. Defaults to 1024.

  • padding (float, optional) -- Padding around the face. Large = 0.8, medium = 0.4, small = 0.25, very small = 0.1. Defaults to 0.4.

  • size (int, optional) -- Size of the image data after resizing. Defaults to 224.

  • contain (str, optional) -- A word contained in the file names in the directory.

Returns:

List of files for which alignment and resizing failed.

Return type:

error_files (list)

Result:
Image taken from https://tokai-kaoninsho.com

Note

If the width of the input image file exceeds 1024px, it is resized to 1024px while maintaining the aspect ratio.
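Example

A short usage sketch based on the signature above; the directory path is illustrative.

>>> error_files = Utils().align_and_resize_maintain_aspect_ratio(
        '/path/to/images',
        padding=0.4,
        size=224
    )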

create_concat_images(img: str, size: int = 224) → None[source]

Create tile images.

Parameters:
  • img (str) -- absolute file path

  • size (int) -- image size. Default is 224.

Result:
Image taken from https://tokai-kaoninsho.com
distort_barrel(dir_path: str, align_and_resize_bool: bool = False, size: int = 224, padding: float = 0.1, initial_value: float = -0.1, closing_value: float = 0.1, step_value: float = 0.1) → List[str][source]

Distort barrel.

Takes a path which contained png, jpg, jpeg files in the directory, distort barrel and saves them.

Parameters:
  • dir_path (str) -- absolute path of target directory.

  • align_and_resize_bool (bool, optional) -- Whether to align and resize. Defaults to False.

  • size (int, optional) -- Width and height. Defaults to 224.

  • padding (float, optional) -- Padding. Defaults to 0.1.

  • initial_value (float) -- Initial value. Default is -0.1.

  • closing_value (float) -- Closing value. Default is 0.1.

  • step_value (float) -- Step value. Default is 0.1.

Returns:

Path list of processed files.

Note

ImageMagick must be installed on your system. - See ImageMagick https://imagemagick.org/script/download.php

Result:
Image taken from https://tokai-kaoninsho.com
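Example

A short usage sketch based on the signature above; the directory path is illustrative.

>>> processed_files = Utils().distort_barrel(
        '/path/to/images',
        size=224,
        padding=0.1
    )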
get_face_encoding(deep_learning_model: int, image_path: str, num_jitters: int = 0, number_of_times_to_upsample: int = 0, mode: str = 'cnn', model: str = 'small')[source]

Get the face encoding from an image file.

Parameters:
  • deep_learning_model (int) -- dlib model: 0, efficientnetv2_arcface model: 1

  • image_path (str) -- image file path.

  • num_jitters (int, optional) -- Number of jitters. Defaults to 0.

  • number_of_times_to_upsample (int, optional) -- Number of times to upsample the image looking for faces. Defaults to 0.

  • mode (str, optional) -- cnn or hog. Defaults to 'cnn'.

  • model (str, optional) -- small or large. Defaults to 'small'.

Returns:

Face encoding data, or None if no face is detected.

Return type:

NDArray data (npt.NDArray[np.float32])

get_files_from_path(path: str, contain: str = 'resize') → list[source]

Receive path, return files.

Parameters:
  • path (str) -- Directory path

  • contain (str) -- Word that file names must contain. If you want to get all files, set *. Default is resize.

Returns:

Files in received path (absolute path)

Return type:

list

get_jitter_image(dir_path: str, num_jitters: int = 10, size: int = 224, disturb_color: bool = True)[source]

Jitter images at the specified path.

Parameters:
  • dir_path (str) -- path of target directory.

  • num_jitters (int, optional) -- Number of jitters. Defaults to 10.

  • size (int, optional) -- Resize image to size(px). Defaults to 224px.

  • disturb_color (bool, optional) -- Disturb color. Defaults to True.

Note: This method is based on davisking/dlib/python_example/face_jitter.py. https://github.com/davisking/dlib/blob/master/python_examples/face_jitter.py

resize_image(img: ndarray, upper_limit_length: int = 1024) → ndarray[source]

Resize an image.

The input np.ndarray format image data is resized to fit the specified width or height. In this process, the aspect ratio is maintained by resizing based on the longer side of the width and height. The default maximum values for width and height are 1024px.

Parameters:
  • img (np.ndarray) -- image data.

  • upper_limit_length (int, optional) -- upper limit length. Defaults to 1024.

Returns:

Resized image data.

Return type:

np.ndarray

return_qr_code(face_encodings) → List[ndarray][source]

Return a QR code.

Summary:

This method returns a QR code based on the face encoding list.

Parameters:

face_encodings (List) -- face encoding list.

Returns:

QR code.

Return type:

List

See also

example/make_ID_card.py

Results:
(Image: ID_card_sample.png)
temp_sleep(temp: float = 80.0, sleep_time: int = 60)[source]

Sleep based on CPU temperature.

If the CPU temperature exceeds the value specified by the temp argument, this method sleeps for the time specified by sleep_time. If the sensors command fails to get the CPU temperature, it retries up to 3 times at 1-second intervals. If it still cannot get the temperature, the program exits.

Parameters:
  • temp (float, optional) -- cpu temperature. Defaults to 80.0.

  • sleep_time (int, optional) -- sleep time. Defaults to 60.

Returns:

None

Note

The sensors and notify-send commands are required to use this method. The sensors command is included in the lm-sensors package. The notify-send command is included in the libnotify-bin package.

face01lib.video_capture module

The VidCap class.

class face01lib.video_capture.VidCap(log_level: str = 'info')[source]

Bases: object

VidCap class.

Contains methods that perform initial processing of the input video data.

finalize(vcap) → None[source]

Release vcap and destroy the window.

Parameters:

vcap (cv2.VideoCapture) -- vcap, the handle of the input video process

frame_generator(CONFIG: Dict) → Generator[source]

Generator: Return resized frame data.

Parameters:

CONFIG (Dict) -- CONFIG

Raises:

StopIteration -- if ret == False, StopIteration is raised

Yields:

Generator -- Resized frame data (npt.NDArray[np.uint8])
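Example

A short usage sketch; CONFIG is assumed to be the dictionary returned by Initialize().initialize().

>>> frame_generator_obj = VidCap().frame_generator(CONFIG)
>>> resized_frame = next(frame_generator_obj)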

frame_imshow_for_debug(frame: ndarray[Any, dtype[uint8]]) → None[source]

Used for debugging.

Display the given frame data in a GUI window for 3 seconds.

Parameters:

frame (npt.NDArray[np.uint8]) -- Image data called 'frame'

Returns:

None

resize_frame(set_width: int, set_height: int, frame: ndarray[Any, dtype[uint8]]) → ndarray[Any, dtype[uint8]][source]

Return resized frame data.

Parameters:
  • set_width (int) -- Width described in config.ini

  • set_height (int) -- Height described in config.ini

  • frame (npt.NDArray[np.uint8]) -- Image data

Returns:

small_frame

Return type:

npt.NDArray[np.uint8]

return_movie_property(set_width: int, vcap) → Tuple[int, ...][source]

Return input movie file's property.

Parameters:
  • set_width (int) -- Width which set in config.ini

  • vcap (cv2.VideoCapture) -- Handle of input movie processing

Returns:

self.set_width, fps, height, width, set_height

Return type:

Tuple[int, ...]

return_vcap(movie: str) → VideoCapture[source]

Return vcap object.

Parameters:

movie (str) -- movie

Returns:

cv2.VideoCapture

Return type:

object

Module contents