Compare commits
62 Commits
ticket-769...main

@@ -0,0 +1,60 @@
# GitLib

The `gitapi.py` is an API for OgGit, written in Python/Flask.

It is an HTTP server that receives commands and executes maintenance actions, including the creation and deletion of repositories.

# Installing Python dependencies

The conversion of the code to Python 3 currently requires the packages specified in `requirements.txt`.

To install the Python dependencies, the `venv` module is used (https://docs.python.org/3/library/venv.html), which installs all dependencies in an environment isolated from the system.

# Usage

# Ubuntu 24.04

```bash
sudo apt install -y python3-flask python3-paramiko opengnsys-flask-executor opengnsys-flask-restx
```

The `opengnsys-flask-executor` and `opengnsys-flask-restx` packages are available on the OpenGnsys package server.

Run with:

```bash
./gitapi.py
```

**Note:** Run as `opengnsys`, as it manages the images located in `/opt/opengnsys/images`.

# Documentation

Python documentation can be generated with a utility such as `pdoc3` (there are multiple possible alternatives):

```bash
# Install pdoc3
pip install --user pdoc3

# Generate documentation
pdoc3 --force --html opengnsys_git_installer.py
```

# Operation

## Requirements

The gitapi is designed to run within an existing OpenGnsys environment. It should be installed on an ogrepository.
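The API locates repositories by scanning the images directory for bare-repository layouts, that is, directories containing a `HEAD` file, and stripping the `.git` suffix from the directory name (this is the logic of the repository-listing endpoint in `gitapi.py`). A minimal standalone sketch of that scan, using a temporary directory in place of `/opt/opengnsys/images`:

```python
import os
import tempfile

def list_repositories(base_path):
    """Return repository names: directories under base_path that contain a HEAD file."""
    repos = []
    for entry in os.scandir(base_path):
        if entry.is_dir(follow_symlinks=False) and os.path.isfile(os.path.join(entry.path, "HEAD")):
            name = entry.name
            if name.endswith(".git"):
                name = name[:-4]  # strip the ".git" suffix, as gitapi.py does
            repos.append(name)
    return sorted(repos)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as base:
        # Fake a bare repository ("linux.git" containing a HEAD file) ...
        os.makedirs(os.path.join(base, "linux.git"))
        open(os.path.join(base, "linux.git", "HEAD"), "w").close()
        # ... and a directory that is not a repository.
        os.makedirs(os.path.join(base, "not-a-repo"))
        print(list_repositories(base))  # -> ['linux']
```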
## API Examples

### Get list of branches

```bash
$ curl -L http://localhost:5000/repositories/linux/branches
{
    "branches": [
        "master"
    ]
}
```

### Synchronize with remote repository

```bash
curl --header "Content-Type: application/json" --data '{"remote_repository":"foobar"}' -X POST -L http://localhost:5000/repositories/linux/sync
```
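The same calls can be made from Python. The sketch below stands up a stub HTTP server that mimics the JSON shape of the branches endpoint shown above (the real server is `gitapi.py`; the stub exists only so the example is self-contained), then queries and parses it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Stand-in for gitapi.py: answers the branches endpoint with canned JSON."""
    def do_GET(self):
        if self.path == "/repositories/linux/branches":
            body = json.dumps({"branches": ["master"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example output quiet

def get_branches(base_url, repo):
    """GET /repositories/<repo>/branches and return the branch list."""
    with urllib.request.urlopen(f"{base_url}/repositories/{repo}/branches") as resp:
        return json.load(resp)["branches"]

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
branches = get_branches(f"http://127.0.0.1:{server.server_port}", "linux")
server.shutdown()
print(branches)  # -> ['master']
```

Against a live ogrepository you would pass the real base URL (e.g. `http://localhost:5000`) instead of the stub's address.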

@@ -1,4 +1,4 @@
-# GitLib
+# Git API

 The `gitapi.py` is an API for OgGit, written in Python/Flask.
@@ -59,7 +59,7 @@ The gitapi is designed to run within an existing OpenGnsys environment
 $ curl -L http://localhost:5000/repositories/linux/branches
 {
     "branches": [
         "master"
     ]
 }
@@ -1,32 +1,103 @@
#!/usr/bin/env python3
"""
This module provides a Flask-based API for managing Git repositories in the OpenGnsys system.
It includes endpoints for creating, deleting, synchronizing, backing up, and performing garbage
collection on Git repositories. The API also provides endpoints for retrieving repository
information such as the list of repositories and branches, as well as checking the status of
asynchronous tasks.

Classes:
    None

Functions:
    do_repo_backup(repo, params)
    do_repo_sync(repo, params)
    do_repo_gc(repo)
    home()
    get_repositories()
    create_repo(repo)
    sync_repo(repo)
    backup_repository(repo)
    gc_repo(repo)
    tasks_status(task_id)
    delete_repo(repo)
    get_repository_branches(repo)
    health_check()

Constants:
    REPOSITORIES_BASE_PATH (str): The base path where Git repositories are stored.

Global Variables:
    app (Flask): The Flask application instance.
    executor (Executor): The Flask-Executor instance for managing asynchronous tasks.
    tasks (dict): A dictionary to store the status of asynchronous tasks.
"""

# pylint: disable=locally-disabled, line-too-long

import os.path
import os
import shutil
import subprocess
import uuid
import time

import git
import paramiko
from flask import Flask, request, jsonify  # stream_with_context, Response,
from flask_executor import Executor
from flask_restx import Api, Resource, fields
#from flasgger import Swagger

from opengnsys_git_installer import OpengnsysGitInstaller

REPOSITORIES_BASE_PATH = "/opt/opengnsys/images"

start_time = time.time()
tasks = {}


# Create an instance of the Flask class
app = Flask(__name__)
api = Api(app,
          version='0.50',
          title="OpenGnsys Git API",
          description="API for managing disk images stored in Git",
          doc="/swagger/")

git_ns = api.namespace(name="oggit", description="Git operations", path="/oggit/v1")

executor = Executor(app)


def do_repo_backup(repo, params):
    """
    Creates a backup of the specified Git repository and uploads it to a remote server via SFTP.

    Args:
        repo (str): The name of the repository to back up.
        params (dict): A dictionary containing the following keys:
            - ssh_server (str): The SSH server address.
            - ssh_port (int): The SSH server port.
            - ssh_user (str): The SSH username.
            - filename (str): The remote filename where the backup will be stored.

    Returns:
        bool: True if the backup was successful.
    """

    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
@@ -42,320 +113,380 @@ def do_repo_backup(repo, params):
    return True

def do_repo_sync(repo, params):
    """
    Synchronizes a local Git repository with a remote repository.

    Args:
        repo (str): The name of the local repository to synchronize.
        params (dict): A dictionary containing the remote repository URL with the key "remote_repository".

    Returns:
        list: A list of dictionaries, each containing:
            - "local_ref" (str): The name of the local reference.
            - "remote_ref" (str): The name of the remote reference.
            - "summary" (str): A summary of the push operation for the reference.
    """
    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")

    # Recreate the remote every time, it might change
    if "backup" in gitrepo.remotes:
        gitrepo.delete_remote("backup")

    backup_repo = gitrepo.create_remote("backup", params["remote_repository"])
    pushed_references = backup_repo.push("*:*")
    results = []

    # This gets returned to the API
    for ref in pushed_references:
        results = results + [{"local_ref": ref.local_ref.name, "remote_ref": ref.remote_ref.name, "summary": ref.summary}]

    return results

def do_repo_gc(repo):
    """
    Perform garbage collection on the specified Git repository.

    Args:
        repo (str): The name of the repository to perform garbage collection on.

    Returns:
        bool: True if the garbage collection command was executed successfully.
    """
    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")

    gitrepo.git.gc()
    return True


@api.route('/')
class GitLib(Resource):
    @api.doc('home')
    def get(self):
        """
        Home route that returns a JSON response with a welcome message for the OpenGnsys Git API.

        Returns:
            Response: A Flask JSON response containing a welcome message.
        """
        return {
            "message": "OpenGnsys Git API"
        }

@git_ns.route('/oggit/v1/repositories')
class GitRepositories(Resource):
    def get(self):
        """
        Retrieve a list of Git repositories.

        This endpoint scans the OpenGnsys image path for directories that
        appear to be Git repositories (i.e., they contain a "HEAD" file).
        It returns a JSON response containing the names of these repositories.

        Returns:
            Response: A JSON response with a list of repository names or an
            error message if the repository storage is not found.
            - 200 OK: When the repositories are successfully retrieved.
            - 500 Internal Server Error: When the repository storage is not found.

        Example JSON response:
        {
            "repositories": ["repo1", "repo2"]
        }
        """
        if not os.path.isdir(REPOSITORIES_BASE_PATH):
            return jsonify({"error": "Repository storage not found, git functionality may not be installed."}), 500

        repos = []
        for entry in os.scandir(REPOSITORIES_BASE_PATH):
            if entry.is_dir(follow_symlinks=False) and os.path.isfile(os.path.join(entry.path, "HEAD")):
                name = entry.name
                if name.endswith(".git"):
                    name = name[:-4]

                repos = repos + [name]

        return jsonify({
            "repositories": repos
        })

    def post(self):
        """
        Create a new Git repository.

        This endpoint creates a new Git repository with the specified name.
        If the repository already exists, it returns a status message indicating so.

        Args:
            repo (str): The name of the repository to be created.

        Returns:
            Response: A JSON response with a status message and HTTP status code.
            - 200: If the repository already exists.
            - 201: If the repository is successfully created.
        """
        data = request.json

        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        repo = data["name"]

        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if os.path.isdir(repo_path):
            return jsonify({"status": "Repository already exists"}), 200

        installer = OpengnsysGitInstaller()
        installer.add_forgejo_repo(repo)

        #installer.init_git_repo(repo + ".git")

        return jsonify({"status": "Repository created"}), 201


@git_ns.route('/oggit/v1/repositories/<repo>/sync')
class GitRepoSync(Resource):
    def post(self, repo):
        """
        Synchronize a repository with a remote repository.

        This endpoint triggers the synchronization process for a specified repository.
        It expects a JSON payload with the remote repository details.

        Args:
            repo (str): The name of the repository to be synchronized.

        Returns:
            Response: A JSON response indicating the status of the synchronization process.
            - 200: If the synchronization process has started successfully.
            - 400: If the request payload is missing or invalid.
            - 404: If the specified repository is not found.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        data = request.json

        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        future = executor.submit(do_repo_sync, repo, data)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future
        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/backup')
class GitRepoBackup(Resource):
    # Named "post" (rather than backup_repository) so Flask-RESTX dispatches it on POST requests.
    def post(self, repo):
        """
        Backup a specified repository.

        Endpoint: POST /repositories/<repo>/backup

        Args:
            repo (str): The name of the repository to back up.

        Request Body (JSON):
            ssh_port (int, optional): The SSH port to use for the backup. Defaults to 22.

        Returns:
            Response: A JSON response indicating the status of the backup operation.
            - If the repository is not found, returns a 404 error with a message.
            - If the request body is missing, returns a 400 error with a message.
            - If the backup process starts successfully, returns a 200 status with the task ID.

        Notes:
            - The repository path is constructed by appending ".git" to the repository name.
            - The backup operation is performed asynchronously using a thread pool executor.
            - The task ID of the backup operation is generated using UUID and stored in a global tasks dictionary.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        data = request.json
        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        if not "ssh_port" in data:
            data["ssh_port"] = 22

        future = executor.submit(do_repo_backup, repo, data)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future

        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/compact', methods=['POST'])
class GitRepoCompact(Resource):
    def post(self, repo):
        """
        Initiates a garbage collection (GC) process for a specified Git repository.

        This endpoint triggers an asynchronous GC task for the given repository.
        The task is submitted to an executor, and a unique task ID is generated
        and returned to the client.

        Args:
            repo (str): The name of the repository to perform GC on.

        Returns:
            Response: A JSON response containing the status of the request and
            a unique task ID if the repository is found, or an error
            message if the repository is not found.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        future = executor.submit(do_repo_gc, repo)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future

        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/tasks/<task_id>/status')
class GitTaskStatus(Resource):
    def get(self, task_id):
        """
        Endpoint to check the status of a specific task.

        Args:
            task_id (str): The unique identifier of the task.

        Returns:
            Response: A JSON response containing the status of the task.
            - If the task is not found, returns a 404 error with an error message.
            - If the task is completed, returns a 200 status with the result.
            - If the task is still in progress, returns a 202 status indicating the task is in progress.
        """
        if not task_id in tasks:
            return jsonify({"error": "Task not found"}), 404

        future = tasks[task_id]

        if future.done():
            result = future.result()
            return jsonify({"status": "completed", "result": result}), 200
        else:
            return jsonify({"status": "in progress"}), 202


@git_ns.route('/oggit/v1/repositories/<repo>', methods=['DELETE'])
class GitRepo(Resource):
    def delete(self, repo):
        """
        Deletes a Git repository.

        This endpoint deletes a Git repository specified by the `repo` parameter.
        If the repository does not exist, it returns a 404 error with a message
        indicating that the repository was not found. If the repository is successfully
        deleted, it returns a 200 status with a message indicating that the repository
        was deleted.

        Args:
            repo (str): The name of the repository to delete.

        Returns:
            Response: A JSON response with a status message and the appropriate HTTP status code.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        shutil.rmtree(repo_path)
        return jsonify({"status": "Repository deleted"}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/branches')
class GitRepoBranches(Resource):
    def get(self, repo):
        """
        Retrieve the list of branches for a given repository.

        Args:
            repo (str): The name of the repository.

        Returns:
            Response: A JSON response containing a list of branch names or an error message if the repository is not found.
            - 200: A JSON object with a "branches" key containing a list of branch names.
            - 404: A JSON object with an "error" key containing the message "Repository not found" if the repository does not exist.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        git_repo = git.Repo(repo_path)

        branches = []
        for branch in git_repo.branches:
            branches = branches + [branch.name]

        return jsonify({
            "branches": branches
        })


@git_ns.route('/health')
class GitHealth(Resource):
    def get(self):
        """
        Health check endpoint.

        This endpoint returns a JSON response indicating the health status of the application.

        Returns:
            Response: A JSON response with a status key set to "OK". Currently it always returns
            a successful value, but this endpoint can still be used to check that the API is
            active and functional.
        """
        return {
            "status": "OK"
        }


@git_ns.route('/status')
class GitStatus(Resource):
    def get(self):
        """
        Status check endpoint.

        This endpoint returns a JSON response indicating the status of the application.

        Returns:
            Response: A JSON response with status information.
        """
        return {
            "uptime": time.time() - start_time,
            "active_tasks": len(tasks)
        }


api.add_namespace(git_ns)


# Run the Flask app
if __name__ == '__main__':
    print(f"Map: {app.url_map}")
    app.run(debug=True, host='0.0.0.0')
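The sync, backup, and compact endpoints above all share one pattern: submit a job to the executor, file the returned future under a fresh UUID, and let clients poll the task-status endpoint. A stripped-down sketch of that lifecycle with only the standard library (no Flask; `do_work` stands in for a long git operation):

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
tasks = {}  # task_id -> Future, mirroring the global dict in gitapi.py

def do_work():
    time.sleep(0.1)  # stand-in for a slow git push / gc / backup
    return {"summary": "done"}

def start_task():
    """Submit a job and register its future under a new UUID."""
    future = executor.submit(do_work)
    task_id = str(uuid.uuid4())
    tasks[task_id] = future
    return task_id

def task_status(task_id):
    """Same decision tree as the tasks/<task_id>/status endpoint."""
    if task_id not in tasks:
        return {"error": "Task not found"}
    future = tasks[task_id]
    if future.done():
        return {"status": "completed", "result": future.result()}
    return {"status": "in progress"}

task_id = start_task()
print(task_status(task_id))            # usually still {"status": "in progress"}
tasks[task_id].result()                # block until the job finishes
print(task_status(task_id)["status"])  # -> completed
```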

@@ -0,0 +1,34 @@
aniso8601==9.0.1
attrs==24.2.0
bcrypt==4.2.0
blinker==1.8.2
cffi==1.17.1
click==8.1.7
cryptography==43.0.1
dataclasses==0.6
flasgger==0.9.7.1
Flask==3.0.3
Flask-Executor==1.0.0
flask-restx==1.3.0
gitdb==4.0.11
GitPython==3.1.43
importlib_resources==6.4.5
itsdangerous==2.2.0
Jinja2==3.1.4
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
libarchive-c==5.1
MarkupSafe==3.0.1
mistune==3.0.2
packaging==24.1
paramiko==3.5.0
pycparser==2.22
PyNaCl==1.5.0
pytz==2024.2
PyYAML==6.0.2
referencing==0.35.1
rpds-py==0.20.0
six==1.16.0
smmap==5.0.1
termcolor==2.5.0
Werkzeug==3.0.4

@@ -0,0 +1,122 @@
# GitLib

The `gitlib.py` is a Python library, also usable as a command-line program for testing purposes.

It contains functions for managing git, and the command-line interface allows executing them without needing to write a program that uses the library.

## Requirements

Gitlib is designed to work within an existing OpenGnsys environment. It invokes some OpenGnsys commands internally and reads the parameters passed to the kernel in oglive.

Therefore, it will not work correctly outside of an oglive environment.

## Installing Python dependencies

The code conversion to Python 3 currently requires the packages specified in `requirements.txt`.

The `venv` module (https://docs.python.org/3/library/venv.html) is used to install the Python dependencies, creating an environment isolated from the system.

**Note:** Ubuntu 24.04 includes most of the required dependencies as packages, but there is no `blkid` package, so it must be installed with pip inside a virtual environment.

Run the following commands:

```bash
sudo apt install -y python3 libarchive-dev libblkid-dev pkg-config libacl1-dev
python3 -m venv venvog
. venvog/bin/activate
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
```
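To confirm that subsequent commands really run inside `venvog` rather than against the system Python, you can check the interpreter's prefixes. This is a generic property of `venv` environments, not anything OpenGnsys-specific:

```python
import sys

def in_virtualenv():
    """True when running inside a venv: sys.prefix points at the venv
    while sys.base_prefix still points at the system installation."""
    return sys.prefix != sys.base_prefix

# Prints True inside an activated venv, False under the system interpreter.
print(in_virtualenv())
```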
||||
|
||||
# Usage
|
||||
|
||||
Run with:
|
||||
|
||||
```bash
|
||||
# . venvog/bin/activate
|
||||
# ./gitlib.py
|
||||
```
|
||||
|
||||
In command-line mode, help can be displayed with:
|
||||
|
||||
```bash
|
||||
./gitlib.py --help
|
||||
```
|
||||
|
||||
**Note:** Execute as the `root` user, as `sudo` clears the environment variable changes made by venv. This will likely result in a Python module not found error or program failure due to outdated dependencies.
|
||||
|
||||
**Note:** Commands starting with `--test` exist for internal testing. They are temporary and meant to test specific parts of the code. These may require specific conditions to work and will be removed upon completion of development.
|
||||
|
||||
## Initialize a repository:
|
||||
|
||||
```bash
|
||||
./gitlib.py --init-repo-from /dev/sda2 --repo linux
|
||||
```
|
||||
|
||||
This initializes the 'linux' repository with the content of /mnt/sda2.
|
||||
|
||||
`--repo` specifies the name of one of the repositories configured during the git installation (see git installer).
|
||||
|
||||
The repository is uploaded to the ogrepository, obtained from the boot parameter passed to the kernel.
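Reading a value from the kernel command line, as gitlib does for the ogrepository address, can be sketched as follows. This is a minimal illustration only: the parameter name `ogrepo` is hypothetical, and the real name used by oglive may differ.

```python
def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line (the format of /proc/cmdline) into a dict."""
    params = {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value
        else:
            params[token] = True  # bare flags such as "quiet"
    return params

# In an oglive environment this string would come from open("/proc/cmdline").read()
params = parse_cmdline("quiet splash ogrepo=192.168.2.1 oglivedir=oglive")
print(params["ogrepo"])  # 192.168.2.1
```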
## Clone a repository:

```bash
./gitlib.py --clone-repo-to /dev/sda2 --boot-device /dev/sda --repo linux
```

This clones a repository from the ogrepository. The target is a physical device that will be formatted with the necessary file system.

`--boot-device` specifies the boot device where the bootloader (GRUB or similar) will be installed.

`--repo` is the name of a repository contained in the ogrepository.

# Special Considerations for Windows

## Cloning

* Windows must be completely shut down, not hibernated. See: https://learn.microsoft.com/en-us/troubleshoot/windows-client/setup-upgrade-and-drivers/disable-and-re-enable-hibernation
* Windows must be cleanly shut down using "Shut Down". Gitlib may fail to mount a disk from an improperly shut down system; if so, boot Windows again and shut it down properly.
* Disk encryption (BitLocker) cannot be used.

## Restoration

Windows uses a structure called the BCD (https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcd-system-store-settings-for-uefi?view=windows-11) to store its boot configuration.

This structure can vary depending on the machine where it is deployed. For this reason, gitlib supports storing multiple versions of the BCD internally and selecting the one corresponding to a specific machine.
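The selection of a machine-specific boot-data copy can be sketched roughly as follows. This is an illustration only, not gitlib's actual code: the helper name is hypothetical, the directory naming follows the `efi_data.(id)` convention described under Metadata, and the id is assumed to come from `dmidecode -s system-uuid`.

```python
import os

def pick_efi_data(metadata_dir: str, system_uuid: str) -> str:
    """Return the efi_data directory matching system_uuid, or the generic one.

    system_uuid would normally be obtained by running
    /usr/sbin/dmidecode -s system-uuid on the target machine.
    """
    specific = os.path.join(metadata_dir, f"efi_data.{system_uuid}")
    if os.path.isdir(specific):
        return specific  # machine-specific BCD/ESP copy
    return os.path.join(metadata_dir, "efi_data")  # generic fallback

# With no matching directory on disk, the generic copy is chosen:
print(pick_efi_data("/mnt/sda3/.opengnsys-metadata",
                    "a64cc65b-12a6-42ef-8182-5ae4832e9f19"))
```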
# Documentation

Python documentation can be generated using utilities such as `pdoc3` (other alternatives are also possible):

```bash
# Install pdoc3
pip install --user pdoc3

# Generate documentation
pdoc3 --force --html gitlib.py
```

# Functionality

## Metadata

Git cannot store extended attributes, sockets, or other special file types. Gitlib stores these in `.opengnsys-metadata` at the root of the repository.

The data is saved in `jsonl` files, with one JSON object per line. This makes partial application possible, since only the necessary lines need to be applied.

The following files are included:

* `acls.jsonl`: ACLs
* `empty_directories.jsonl`: Empty directories, as Git cannot store them
* `filesystems.json`: Information about file systems: types, sizes, UUIDs
* `gitignores.jsonl`: List of .gitignore files (renamed to avoid interfering with Git)
* `metadata.json`: General metadata about the repository
* `special_files.jsonl`: Special files such as sockets
* `xattrs.jsonl`: Extended attributes
* `renamed.jsonl`: Files renamed to avoid interfering with Git
* `unix_permissions.jsonl`: UNIX permissions (Git does not store them exactly)
* `ntfs_secaudit.txt`: NTFS security data
* `efi_data`: Copy of the EFI (ESP) partition
* `efi_data.(id)`: EFI partition copy corresponding to a specific machine
* `efi_data.(name)`: EFI partition copy corresponding to a name specified by the administrator
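The `jsonl` layout above can be consumed one line at a time, which is what makes partial application possible. A minimal sketch (the field name `path` is illustrative, not the actual schema of these files):

```python
import io
import json

# One JSON object per line, as in e.g. empty_directories.jsonl.
# io.StringIO stands in for an open file handle.
sample = io.StringIO(
    '{"path": "var/cache"}\n'
    '{"path": "var/log/empty"}\n'
)

# Each line is parsed independently, so a consumer can apply only
# the entries it needs instead of loading one large JSON document.
entries = [json.loads(line) for line in sample if line.strip()]
print([e["path"] for e in entries])  # ['var/cache', 'var/log/empty']
```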
@ -6,44 +6,47 @@ command line for testing.

It contains the git management functions, and the command-line part allows executing them without needing to write a program that uses the library.

# Installing Python dependencies

## Requirements

Gitlib is designed to work within an existing OpenGnsys environment. It invokes some OpenGnsys commands internally, and reads the parameters passed to the kernel in oglive.

Therefore, it will not work correctly outside of an oglive environment.

## Installing Python dependencies

The code conversion to Python 3 currently requires the packages specified in `requirements.txt`.

The `venv` module (https://docs.python.org/3/library/venv.html) is used to install Python dependencies in an environment isolated from the system.

**Note:** Ubuntu 24.04 has most of the required dependencies as packages, but there is no package for `blkid`, so pip and a virtualenv must be used.

Run:

    sudo apt install -y python3 libarchive-dev libblkid-dev pkg-config libacl1-dev
    python3 -m venv venvog
    . venvog/bin/activate
    python3 -m pip install --upgrade pip
    pip3 install -r requirements.txt

# Usage

## Old distributions (18.04)

    sudo apt install -y python3.8 python3.8-venv python3.8-dev python3-venv libarchive-dev libblkid-dev pkg-config libacl1-dev
    python3.8 -m venv venvog
    . venvog/bin/activate
    python3.8 -m pip install --upgrade pip
    pip3 install -r requirements.txt

Run with:

    ./gitlib.py
    # . venvog/bin/activate
    # ./gitlib.py

In command-line mode, help can be displayed with:

    ./gitlib.py --help

Commands starting with `--test` exist for internal testing; they are temporary, intended to exercise specific parts of the code. They may require specific conditions to work and will be removed once development is complete.

**Note:** Run as the `root` user, since `sudo` discards the environment variable changes made by venv. The likely result is a missing Python module error, or a program failure caused by outdated dependencies.

## Usage

**Note:** Preferably run as `root`, since `sudo` discards the environment variable changes made by venv. The likely result is a missing Python module error, or a program failure caused by outdated dependencies.

    # . venv/bin/activate
    # ./opengnsys_git_installer.py

**Note:** Commands starting with `--test` exist for internal testing; they are temporary, intended to exercise specific parts of the code. They may require specific conditions to work and will be removed once development is complete.

### Initialize a repository:

## Initialize a repository:

    ./gitlib.py --init-repo-from /dev/sda2 --repo linux

@ -54,7 +57,7 @@ This initializes the 'linux' repository with the contents of /mnt/sda2.

The repository is uploaded to the ogrepository, which is obtained from the boot parameter passed to the kernel.

### Clone a repository:

## Clone a repository:

    ./gitlib.py --clone-repo-to /dev/sda2 --boot-device /dev/sda --repo linux

@ -64,6 +67,50 @@ This clones a repository from the ogrepository. The target is a physical device

`--repo` is the name of a repository contained in the ogrepository.

# Special considerations for Windows

## Cloning

* Windows must have been shut down completely, without hibernating. See https://learn.microsoft.com/en-us/troubleshoot/windows-client/setup-upgrade-and-drivers/disable-and-re-enable-hibernation
* Windows must have been shut down cleanly, using "Shut down". Gitlib may be unable to mount a disk from an incorrectly shut down system; in that case, boot Windows again and shut it down properly.
* Disk encryption (BitLocker) cannot be used.

## Restoration

Windows uses a structure called the BCD (https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcd-system-store-settings-for-uefi?view=windows-11) to store the boot configuration.

The structure can vary depending on which machine it is deployed to; for this reason gitlib supports storing multiple versions of the BCD internally and choosing the one corresponding to a specific machine.

## Disk identifiers

Depending on how Windows has configured it, the Windows boot process may refer to partition and disk UUIDs when GPT partitioning is used.

The current code preserves the UUIDs and restores them when cloning.
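Saving and restoring GPT identifiers of this kind can be done with `sfdisk`, which is also what the accompanying disk library in this change uses. A hedged sketch that only builds the command lines (the helper names are hypothetical; the `sfdisk` flags mirror the `--disk-id` and `--part-uuid` calls made in the disk library):

```python
# Build the sfdisk invocations used to read and restore GPT identifiers.
# Without a UUID argument sfdisk prints the current value; with one, it sets it.
def sfdisk_save_cmds(disk, partno):
    return [
        ["/usr/sbin/sfdisk", "--disk-id", disk],                 # read disk UUID
        ["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno)],  # read partition UUID
    ]

def sfdisk_restore_cmds(disk, partno, disk_uuid, part_uuid):
    return [
        ["/usr/sbin/sfdisk", "--disk-id", disk, disk_uuid],
        ["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno), part_uuid],
    ]

print(sfdisk_save_cmds("/dev/sda", 2)[0])
```

In real use these argument vectors would be passed to `subprocess.run`, as the disk library does.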
## Specific BCDs

Windows boot data is stored in `.opengnsys-metadata/efi_data`. Additional versions can be included if necessary. This is done by creating an additional directory named `efi_data.(id)`, where id is a serial number obtained with the command `/usr/sbin/dmidecode -s system-uuid`.

For example:

```
# Obtain the machine's unique ID
dmidecode -s system-uuid
a64cc65b-12a6-42ef-8182-5ae4832e9f19

# Copy the EFI partition to the directory corresponding to that particular machine
mkdir /mnt/sda3/.opengnsys-metadata/efi_data.a64cc65b-12a6-42ef-8182-5ae4832e9f19
cp -Rdpv /mnt/sda1/* /mnt/sda3/.opengnsys-metadata/efi_data.a64cc65b-12a6-42ef-8182-5ae4832e9f19

# commit
```

With this, when the repo is deployed, the machine a64cc65b-12a6-42ef-8182-5ae4832e9f19 will use its own boot configuration instead of the general one.

# Documentation

Python documentation can be generated with a utility such as pdoc3 (multiple alternatives are possible):

@ -76,9 +123,6 @@ Python documentation can be generated with a utility such as pdoc3 (multiple alternatives are possible):

# Functionality

## Requirements

Gitlib is designed to work within an existing OpenGnsys environment. It invokes some OpenGnsys commands internally, and reads the parameters passed to the kernel in oglive.

## Metadata

@ -97,3 +141,9 @@ These files exist:

* `metadata.json`: General metadata about the repository
* `special_files.jsonl`: Special files such as sockets
* `xattrs.jsonl`: Extended attributes
* `renamed.jsonl`: Files renamed to avoid interfering with Git
* `unix_permissions.jsonl`: UNIX permissions (Git does not store them exactly)
* `ntfs_secaudit.txt`: NTFS security data
* `efi_data`: Copy of the EFI (ESP) partition
* `efi_data.(id)`: Copy of the EFI partition corresponding to a specific machine.
* `efi_data.(name)`: Copy of the EFI partition corresponding to a name specified by the administrator.
@ -0,0 +1,345 @@

#!/usr/bin/env python3
import argparse
import struct

from hivex import Hivex
from hivex.hive_types import *


# Docs:
#
# https://www.geoffchappell.com/notes/windows/boot/bcd/objects.htm
# https://learn.microsoft.com/en-us/previous-versions/windows/desktop/bcd/bcdbootmgrelementtypes


BCD_Enumerations = {
    "BcdLibraryDevice_ApplicationDevice"                 : 0x11000001,
    "BcdLibraryString_ApplicationPath"                   : 0x12000002,
    "BcdLibraryString_Description"                       : 0x12000004,
    "BcdLibraryString_PreferredLocale"                   : 0x12000005,
    "BcdLibraryObjectList_InheritedObjects"              : 0x14000006,
    "BcdLibraryInteger_TruncatePhysicalMemory"           : 0x15000007,
    "BcdLibraryObjectList_RecoverySequence"              : 0x14000008,
    "BcdLibraryBoolean_AutoRecoveryEnabled"              : 0x16000009,
    "BcdLibraryIntegerList_BadMemoryList"                : 0x1700000a,
    "BcdLibraryBoolean_AllowBadMemoryAccess"             : 0x1600000b,
    "BcdLibraryInteger_FirstMegabytePolicy"              : 0x1500000c,
    "BcdLibraryInteger_RelocatePhysicalMemory"           : 0x1500000D,
    "BcdLibraryInteger_AvoidLowPhysicalMemory"           : 0x1500000E,
    "BcdLibraryBoolean_DebuggerEnabled"                  : 0x16000010,
    "BcdLibraryInteger_DebuggerType"                     : 0x15000011,
    "BcdLibraryInteger_SerialDebuggerPortAddress"        : 0x15000012,
    "BcdLibraryInteger_SerialDebuggerPort"               : 0x15000013,
    "BcdLibraryInteger_SerialDebuggerBaudRate"           : 0x15000014,
    "BcdLibraryInteger_1394DebuggerChannel"              : 0x15000015,
    "BcdLibraryString_UsbDebuggerTargetName"             : 0x12000016,
    "BcdLibraryBoolean_DebuggerIgnoreUsermodeExceptions" : 0x16000017,
    "BcdLibraryInteger_DebuggerStartPolicy"              : 0x15000018,
    "BcdLibraryString_DebuggerBusParameters"             : 0x12000019,
    "BcdLibraryInteger_DebuggerNetHostIP"                : 0x1500001A,
    "BcdLibraryInteger_DebuggerNetPort"                  : 0x1500001B,
    "BcdLibraryBoolean_DebuggerNetDhcp"                  : 0x1600001C,
    "BcdLibraryString_DebuggerNetKey"                    : 0x1200001D,
    "BcdLibraryBoolean_EmsEnabled"                       : 0x16000020,
    "BcdLibraryInteger_EmsPort"                          : 0x15000022,
    "BcdLibraryInteger_EmsBaudRate"                      : 0x15000023,
    "BcdLibraryString_LoadOptionsString"                 : 0x12000030,
    "BcdLibraryBoolean_DisplayAdvancedOptions"           : 0x16000040,
    "BcdLibraryBoolean_DisplayOptionsEdit"               : 0x16000041,
    "BcdLibraryDevice_BsdLogDevice"                      : 0x11000043,
    "BcdLibraryString_BsdLogPath"                        : 0x12000044,
    "BcdLibraryBoolean_GraphicsModeDisabled"             : 0x16000046,
    "BcdLibraryInteger_ConfigAccessPolicy"               : 0x15000047,
    "BcdLibraryBoolean_DisableIntegrityChecks"           : 0x16000048,
    "BcdLibraryBoolean_AllowPrereleaseSignatures"        : 0x16000049,
    "BcdLibraryString_FontPath"                          : 0x1200004A,
    "BcdLibraryInteger_SiPolicy"                         : 0x1500004B,
    "BcdLibraryInteger_FveBandId"                        : 0x1500004C,
    "BcdLibraryBoolean_ConsoleExtendedInput"             : 0x16000050,
    "BcdLibraryInteger_GraphicsResolution"               : 0x15000052,
    "BcdLibraryBoolean_RestartOnFailure"                 : 0x16000053,
    "BcdLibraryBoolean_GraphicsForceHighestMode"         : 0x16000054,
    "BcdLibraryBoolean_IsolatedExecutionContext"         : 0x16000060,
    "BcdLibraryBoolean_BootUxDisable"                    : 0x1600006C,
    "BcdLibraryBoolean_BootShutdownDisabled"             : 0x16000074,
    "BcdLibraryIntegerList_AllowedInMemorySettings"      : 0x17000077,
    "BcdLibraryBoolean_ForceFipsCrypto"                  : 0x16000079,

    "BcdBootMgrObjectList_DisplayOrder"                  : 0x24000001,
    "BcdBootMgrObjectList_BootSequence"                  : 0x24000002,
    "BcdBootMgrObject_DefaultObject"                     : 0x23000003,
    "BcdBootMgrInteger_Timeout"                          : 0x25000004,
    "BcdBootMgrBoolean_AttemptResume"                    : 0x26000005,
    "BcdBootMgrObject_ResumeObject"                      : 0x23000006,
    "BcdBootMgrObjectList_ToolsDisplayOrder"             : 0x24000010,
    "BcdBootMgrBoolean_DisplayBootMenu"                  : 0x26000020,
    "BcdBootMgrBoolean_NoErrorDisplay"                   : 0x26000021,
    "BcdBootMgrDevice_BcdDevice"                         : 0x21000022,
    "BcdBootMgrString_BcdFilePath"                       : 0x22000023,
    "BcdBootMgrBoolean_ProcessCustomActionsFirst"        : 0x26000028,
    "BcdBootMgrIntegerList_CustomActionsList"            : 0x27000030,
    "BcdBootMgrBoolean_PersistBootSequence"              : 0x26000031,

    "BcdDeviceInteger_RamdiskImageOffset"                : 0x35000001,
    "BcdDeviceInteger_TftpClientPort"                    : 0x35000002,
    "BcdDeviceInteger_SdiDevice"                         : 0x31000003,
    "BcdDeviceInteger_SdiPath"                           : 0x32000004,
    "BcdDeviceInteger_RamdiskImageLength"                : 0x35000005,
    "BcdDeviceBoolean_RamdiskExportAsCd"                 : 0x36000006,
    "BcdDeviceInteger_RamdiskTftpBlockSize"              : 0x36000007,
    "BcdDeviceInteger_RamdiskTftpWindowSize"             : 0x36000008,
    "BcdDeviceBoolean_RamdiskMulticastEnabled"           : 0x36000009,
    "BcdDeviceBoolean_RamdiskMulticastTftpFallback"      : 0x3600000A,
    "BcdDeviceBoolean_RamdiskTftpVarWindow"              : 0x3600000B,

    "BcdMemDiagInteger_PassCount"                        : 0x25000001,
    "BcdMemDiagInteger_FailureCount"                     : 0x25000003,

    "Reserved1"                                          : 0x21000001,
    "Reserved2"                                          : 0x22000002,
    "BcdResumeBoolean_UseCustomSettings"                 : 0x26000003,
    "BcdResumeDevice_AssociatedOsDevice"                 : 0x21000005,
    "BcdResumeBoolean_DebugOptionEnabled"                : 0x26000006,
    "BcdResumeInteger_BootMenuPolicy"                    : 0x25000008,

    "BcdOSLoaderDevice_OSDevice"                         : 0x21000001,
    "BcdOSLoaderString_SystemRoot"                       : 0x22000002,
    "BcdOSLoaderObject_AssociatedResumeObject"           : 0x23000003,
    "BcdOSLoaderBoolean_DetectKernelAndHal"              : 0x26000010,
    "BcdOSLoaderString_KernelPath"                       : 0x22000011,
    "BcdOSLoaderString_HalPath"                          : 0x22000012,
    "BcdOSLoaderString_DbgTransportPath"                 : 0x22000013,
    "BcdOSLoaderInteger_NxPolicy"                        : 0x25000020,
    "BcdOSLoaderInteger_PAEPolicy"                       : 0x25000021,
    "BcdOSLoaderBoolean_WinPEMode"                       : 0x26000022,
    "BcdOSLoaderBoolean_DisableCrashAutoReboot"          : 0x26000024,
    "BcdOSLoaderBoolean_UseLastGoodSettings"             : 0x26000025,
    "BcdOSLoaderBoolean_AllowPrereleaseSignatures"       : 0x26000027,
    "BcdOSLoaderBoolean_NoLowMemory"                     : 0x26000030,
    "BcdOSLoaderInteger_RemoveMemory"                    : 0x25000031,
    "BcdOSLoaderInteger_IncreaseUserVa"                  : 0x25000032,
    "BcdOSLoaderBoolean_UseVgaDriver"                    : 0x26000040,
    "BcdOSLoaderBoolean_DisableBootDisplay"              : 0x26000041,
    "BcdOSLoaderBoolean_DisableVesaBios"                 : 0x26000042,
    "BcdOSLoaderBoolean_DisableVgaMode"                  : 0x26000043,
    "BcdOSLoaderInteger_ClusterModeAddressing"           : 0x25000050,
    "BcdOSLoaderBoolean_UsePhysicalDestination"          : 0x26000051,
    "BcdOSLoaderInteger_RestrictApicCluster"             : 0x25000052,
    "BcdOSLoaderBoolean_UseLegacyApicMode"               : 0x26000054,
    "BcdOSLoaderInteger_X2ApicPolicy"                    : 0x25000055,
    "BcdOSLoaderBoolean_UseBootProcessorOnly"            : 0x26000060,
    "BcdOSLoaderInteger_NumberOfProcessors"              : 0x25000061,
    "BcdOSLoaderBoolean_ForceMaximumProcessors"          : 0x26000062,
    "BcdOSLoaderBoolean_ProcessorConfigurationFlags"     : 0x25000063,
    "BcdOSLoaderBoolean_MaximizeGroupsCreated"           : 0x26000064,
    "BcdOSLoaderBoolean_ForceGroupAwareness"             : 0x26000065,
    "BcdOSLoaderInteger_GroupSize"                       : 0x25000066,
    "BcdOSLoaderInteger_UseFirmwarePciSettings"          : 0x26000070,
    "BcdOSLoaderInteger_MsiPolicy"                       : 0x25000071,
    "BcdOSLoaderInteger_SafeBoot"                        : 0x25000080,
    "BcdOSLoaderBoolean_SafeBootAlternateShell"          : 0x26000081,
    "BcdOSLoaderBoolean_BootLogInitialization"           : 0x26000090,
    "BcdOSLoaderBoolean_VerboseObjectLoadMode"           : 0x26000091,
    "BcdOSLoaderBoolean_KernelDebuggerEnabled"           : 0x260000a0,
    "BcdOSLoaderBoolean_DebuggerHalBreakpoint"           : 0x260000a1,
    "BcdOSLoaderBoolean_UsePlatformClock"                : 0x260000A2,
    "BcdOSLoaderBoolean_ForceLegacyPlatform"             : 0x260000A3,
    "BcdOSLoaderInteger_TscSyncPolicy"                   : 0x250000A6,
    "BcdOSLoaderBoolean_EmsEnabled"                      : 0x260000b0,
    "BcdOSLoaderInteger_DriverLoadFailurePolicy"         : 0x250000c1,
    "BcdOSLoaderInteger_BootMenuPolicy"                  : 0x250000C2,
    "BcdOSLoaderBoolean_AdvancedOptionsOneTime"          : 0x260000C3,
    "BcdOSLoaderInteger_BootStatusPolicy"                : 0x250000E0,
    "BcdOSLoaderBoolean_DisableElamDrivers"              : 0x260000E1,
    "BcdOSLoaderInteger_HypervisorLaunchType"            : 0x250000F0,
    "BcdOSLoaderBoolean_HypervisorDebuggerEnabled"       : 0x260000F2,
    "BcdOSLoaderInteger_HypervisorDebuggerType"          : 0x250000F3,
    "BcdOSLoaderInteger_HypervisorDebuggerPortNumber"    : 0x250000F4,
    "BcdOSLoaderInteger_HypervisorDebuggerBaudrate"      : 0x250000F5,
    "BcdOSLoaderInteger_HypervisorDebugger1394Channel"   : 0x250000F6,
    "BcdOSLoaderInteger_BootUxPolicy"                    : 0x250000F7,
    "BcdOSLoaderString_HypervisorDebuggerBusParams"      : 0x220000F9,
    "BcdOSLoaderInteger_HypervisorNumProc"               : 0x250000FA,
    "BcdOSLoaderInteger_HypervisorRootProcPerNode"       : 0x250000FB,
    "BcdOSLoaderBoolean_HypervisorUseLargeVTlb"          : 0x260000FC,
    "BcdOSLoaderInteger_HypervisorDebuggerNetHostIp"     : 0x250000FD,
    "BcdOSLoaderInteger_HypervisorDebuggerNetHostPort"   : 0x250000FE,
    "BcdOSLoaderInteger_TpmBootEntropyPolicy"            : 0x25000100,
    "BcdOSLoaderString_HypervisorDebuggerNetKey"         : 0x22000110,
    "BcdOSLoaderBoolean_HypervisorDebuggerNetDhcp"       : 0x26000114,
    "BcdOSLoaderInteger_HypervisorIommuPolicy"           : 0x25000115,
    "BcdOSLoaderInteger_XSaveDisable"                    : 0x2500012b
}


def format_value(bcd, bcd_value):
    """Formats a single BCD registry value into (typename, length, printable string)."""
    (val_type, length) = bcd.value_type(bcd_value)

    typename = ""
    str_value = ""
    if val_type == REG_SZ:
        typename = "SZ"
        str_value = bcd.value_string(bcd_value)
    elif val_type == REG_DWORD:
        typename = "DWORD"
        dval = bcd.value_dword(bcd_value)
        str_value = hex(dval) + " (" + str(dval) + ")"
    elif val_type == REG_BINARY:
        typename = "BIN"
        (length, value) = bcd.value_value(bcd_value)
        str_value = value.hex()
    elif val_type == REG_DWORD_BIG_ENDIAN:
        typename = "DWORD_BE"
    elif val_type == REG_EXPAND_SZ:
        typename = "EXPAND SZ"
    elif val_type == REG_FULL_RESOURCE_DESCRIPTOR:
        typename = "RES DESC"
    elif val_type == REG_LINK:
        typename = "LINK"
    elif val_type == REG_MULTI_SZ:
        typename = "MULTISZ"
        (length, str_value) = bcd.value_value(bcd_value)
        str_value = str_value.decode('utf-16le')
        str_value = str_value.replace("\0", ";")
    elif val_type == REG_NONE:
        typename = "NONE"
    elif val_type == REG_QWORD:
        typename = "QWORD"
    elif val_type == REG_RESOURCE_LIST:
        typename = "RES LIST"
    elif val_type == REG_RESOURCE_REQUIREMENTS_LIST:
        typename = "REQ LIST"
    else:
        typename = str(val_type)
        str_value = "???"

    return (typename, length, str_value)


def dump_all(bcd, root, depth=0):
    """Recursively dumps a hive subtree, printing values at the leaves."""
    padding = "\t" * depth

    children = bcd.node_children(root)

    if len(children) > 0:
        for child in children:
            name = bcd.node_name(child)
            print(f"{padding}{name}")
            dump_all(bcd, child, depth + 1)
        return

    for v in bcd.node_values(root):
        (type_name, length, str_value) = format_value(bcd, v)
        name = bcd.value_key(v)
        print(f"{padding}{name: <16}: [{type_name: <10}]; ({length: < 4}) {str_value}")


class WindowsBCD:
    def __init__(self, filename):
        self.filename = filename
        self.bcd = Hivex(filename)

    def dump(self, root=None, depth=0):
        padding = "\t" * depth

        if root is None:
            root = self.bcd.root()

        children = self.bcd.node_children(root)

        if len(children) > 0:
            for child in children:
                name = self.bcd.node_name(child)
                print(f"{padding}{name}")
                self.dump(child, depth + 1)
            return

        for v in self.bcd.node_values(root):
            (type_name, length, str_value) = format_value(self.bcd, v)
            name = self.bcd.value_key(v)
            print(f"{padding}{name: <16}: [{type_name: <10}]; ({length: < 4}) {str_value}")

    def list(self):
        root = self.bcd.root()
        objects = self.bcd.node_get_child(root, "Objects")

        for child in self.bcd.node_children(objects):
            entry_id = self.bcd.node_name(child)

            elements = self.bcd.node_get_child(child, "Elements")
            description_entry = self.bcd.node_get_child(elements, "12000004")

            if description_entry:
                values = self.bcd.node_values(description_entry)
                if values:
                    (type_name, length, str_value) = format_value(self.bcd, values[0])
                    print(f"{entry_id}: {str_value}")
                else:
                    print(f"{entry_id}: [no description value!?]")

                appdevice_entry = self.bcd.node_get_child(elements, "11000001")

                if appdevice_entry:
                    values = self.bcd.node_values(appdevice_entry)
                    (length, data) = self.bcd.value_value(values[0])
                    hex_data = data.hex()
                    print(f"LEN: {length}, HEX: {hex_data}, RAW: {data}")
                    if len(data) > 10:
                        etype = struct.unpack_from('<I', data, offset=16)
                        print(f"Type: {etype}")
            else:
                print(f"{entry_id}: [no description entry 12000004]")


parser = argparse.ArgumentParser(
    prog="Windows BCD parser",
    description="Parses the BCD",
)

parser.add_argument("--db", type=str, metavar='BCD file', help="Database to use")
parser.add_argument("--dump", action='store_true', help="Dumps the specified database")
parser.add_argument("--list", action='store_true', help="Lists boot entries in the specified database")

args = parser.parse_args()

bcdobj = WindowsBCD(args.db)

if args.dump:
    bcdobj.dump()
elif args.list:
    bcdobj.list()
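The device-element decoding near the end of `list()` reads a little-endian DWORD at offset 16 of the element blob. A standalone illustration with synthetic data (only that one field is modeled; the rest of the real blob layout is not):

```python
import struct

# Synthetic 24-byte device element: zeros, then the element type DWORD
# at offset 16, matching the struct.unpack_from call in WindowsBCD.list().
data = bytes(16) + struct.pack("<I", 6) + bytes(4)

(etype,) = struct.unpack_from("<I", data, offset=16)
print(etype)  # 6
```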
@ -0,0 +1,115 @@

import logging
import re
import subprocess

# pylint: disable=locally-disabled, line-too-long, logging-fstring-interpolation, too-many-lines


class DiskLibrary:
    def __init__(self):
        self.logger = logging.getLogger("OpengnsysDiskLibrary")
        self.logger.setLevel(logging.DEBUG)

    def split_device_partition(self, device):
        """
        Parses a device file like /dev/sda3 into the root device (/dev/sda) and partition number (3).

        Args:
            device (str): Device in /dev

        Returns:
            (base_device, partno)
        """
        r = re.compile("^(.*?)(\\d+)$")
        m = r.match(device)
        disk = m.group(1)
        partno = int(m.group(2))

        self.logger.debug(f"{device} parsed into disk device {disk}, partition {partno}")
        return (disk, partno)

    def get_disk_json_data(self, device):
        """
        Returns the partition JSON data dump for the entire disk, even if a partition is passed.

        This is specifically in the format used by sfdisk.

        Args:
            device (str): Block device, eg, /dev/sda3

        Returns:
            str: JSON dump produced by sfdisk
        """
        (disk, partno) = self.split_device_partition(device)

        result = subprocess.run(["/usr/sbin/sfdisk", "--json", disk], check=True, capture_output=True, encoding='utf-8')
        return result.stdout.strip()

    def get_disk_uuid(self, device):
        """
        Returns the UUID of the disk itself, if there's a GPT partition table.

        Args:
            device (str): Block device, eg, /dev/sda3

        Returns:
            str: UUID
        """
        (disk, partno) = self.split_device_partition(device)

        result = subprocess.run(["/usr/sbin/sfdisk", "--disk-id", disk], check=True, capture_output=True, encoding='utf-8')
        return result.stdout.strip()

    def set_disk_uuid(self, device, uuid):
        """Sets the UUID of the disk itself (the GPT disk identifier)."""
        (disk, partno) = self.split_device_partition(device)

        subprocess.run(["/usr/sbin/sfdisk", "--disk-id", disk, uuid], check=True, encoding='utf-8')

    def get_partition_uuid(self, device):
        """
        Returns the UUID of the partition, if there's a GPT partition table.

        Args:
            device (str): Block device, eg, /dev/sda3

        Returns:
            str: UUID
        """
        (disk, partno) = self.split_device_partition(device)

        result = subprocess.run(["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno)], check=True, capture_output=True, encoding='utf-8')
        return result.stdout.strip()

    def set_partition_uuid(self, device, uuid):
        """Sets the UUID of the partition (GPT only)."""
        (disk, partno) = self.split_device_partition(device)

        subprocess.run(["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno), uuid], check=True, encoding='utf-8')

    def get_partition_type(self, device):
        """
        Returns the type UUID of the partition, if there's a GPT partition table.

        Args:
            device (str): Block device, eg, /dev/sda3

        Returns:
            str: UUID
        """
        (disk, partno) = self.split_device_partition(device)

        result = subprocess.run(["/usr/sbin/sfdisk", "--part-type", disk, str(partno)], check=True, capture_output=True, encoding='utf-8')
        return result.stdout.strip()

    def set_partition_type(self, device, uuid):
        """Sets the type UUID of the partition (GPT only)."""
        (disk, partno) = self.split_device_partition(device)

        subprocess.run(["/usr/sbin/sfdisk", "--part-type", disk, str(partno), uuid], check=True, encoding='utf-8')
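The device-name parsing in `split_device_partition` can be exercised on its own. Note that the trailing-digits regex assumes `sdX`-style names; an `nvme0n1p3`-style name splits at the last digit run, yielding `/dev/nvme0n1p` rather than the actual disk `/dev/nvme0n1`:

```python
import re

def split_device_partition(device):
    """Split /dev/sda3 into ('/dev/sda', 3), replicating the library's regex."""
    m = re.compile(r"^(.*?)(\d+)$").match(device)
    if m is None:
        raise ValueError(f"not a partition device: {device}")
    return (m.group(1), int(m.group(2)))

print(split_device_partition("/dev/sda3"))  # ('/dev/sda', 3)
# Caveat: nvme-style names are split at the final digit run, which is
# not the real base device name.
print(split_device_partition("/dev/nvme0n1p3"))  # ('/dev/nvme0n1p', 3)
```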
@ -0,0 +1,544 @@
|
|||
|
||||
import logging
|
||||
import subprocess
|
||||
import os
|
||||
import json
|
||||
import blkid
|
||||
import time
|
||||
|
||||
from ntfs import *
|
||||
|
||||
|
||||
|
||||
# pylint: disable=locally-disabled, line-too-long, logging-fstring-interpolation, too-many-lines
|
||||
|
||||
|
||||
class FilesystemLibrary:
|
||||
def __init__(self, ntfs_implementation = NTFSImplementation.KERNEL):
|
||||
self.logger = logging.getLogger("OpengnsysFilesystemLibrary")
|
||||
self.logger.setLevel(logging.DEBUG)
|
||||
|
||||
self.mounts = {}
|
||||
self.base_mount_path = "/mnt"
|
||||
self.ntfs_implementation = ntfs_implementation
|
||||
|
||||
self.update_mounts()
|
||||
|
||||
def _rmmod(self, module):
|
||||
self.logger.debug("Trying to unload module {module}...")
|
||||
subprocess.run(["/usr/sbin/rmmod", module], check=False)
|
||||
|
||||
def _modprobe(self, module):
|
||||
self.logger.debug("Trying to load module {module}...")
|
||||
subprocess.run(["/usr/sbin/modprobe", module], check=True)
|
||||
|
||||
|
||||
# _parse_mounts
|
||||
    def update_mounts(self):
        """
        Update the current mount points by parsing the /proc/mounts file.

        This method reads the /proc/mounts file to gather information about
        the currently mounted filesystems. It stores this information in a
        dictionary where the keys are the mount points and the values are
        dictionaries containing details about each filesystem.

        The details stored for each filesystem include:
        - device: The device file associated with the filesystem.
        - mountpoint: The directory where the filesystem is mounted.
        - type: The type of the filesystem (e.g., ext4, vfat).
        - options: Mount options associated with the filesystem.
        - dump_freq: The dump frequency for the filesystem.
        - passno: The pass number for filesystem checks.

        The method also adds an entry for each mount point with a trailing
        slash to ensure consistency in accessing the mount points.

        Attributes:
            mounts (dict): A dictionary where keys are mount points and values
                are dictionaries containing filesystem details.
        """
        filesystems = {}

        self.logger.debug("Parsing /proc/mounts")

        with open("/proc/mounts", 'r', encoding='utf-8') as mounts:
            for line in mounts:
                parts = line.split()
                data = {}
                data['device'] = parts[0]
                data['mountpoint'] = parts[1]
                data['type'] = parts[2]
                data['options'] = parts[3]
                data['dump_freq'] = parts[4]
                data['passno'] = parts[5]

                filesystems[data["mountpoint"]] = data
                filesystems[data["mountpoint"] + "/"] = data

        self.mounts = filesystems
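The field order that `update_mounts` relies on can be checked in isolation; the sketch below repeats the same per-field mapping on a made-up sample line rather than data read from a live system:

```python
# Sketch of the /proc/mounts field layout update_mounts() parses;
# the sample line is illustrative only.
def parse_mounts_line(line):
    parts = line.split()
    return {
        "device": parts[0],      # e.g. /dev/sda1
        "mountpoint": parts[1],  # e.g. /boot
        "type": parts[2],        # e.g. ext4
        "options": parts[3],     # e.g. rw,relatime
        "dump_freq": parts[4],
        "passno": parts[5],
    }

sample = "/dev/sda1 /boot ext4 rw,relatime 0 2"
entry = parse_mounts_line(sample)
print(entry["mountpoint"], entry["type"])  # /boot ext4
```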
    def find_mountpoint(self, device):
        """
        Find the mount point for a given device.

        This method checks if the specified device is currently mounted and returns
        the corresponding mount point if it is found.

        Args:
            device (str): The path to the device to check.

        Returns:
            str or None: The mount point of the device if it is mounted, otherwise None.
        """
        norm = os.path.normpath(device)

        self.logger.debug(f"Checking if {device} is mounted")
        for mountpoint, mount in self.mounts.items():
            #self.logger.debug(f"Item: {mount}")
            #self.logger.debug(f"Checking: " + mount['device'])
            if mount['device'] == norm:
                return mountpoint

        return None
    def find_device(self, mountpoint):
        """
        Find the device corresponding to a given mount point.

        Args:
            mountpoint (str): The mount point to search for.

        Returns:
            str or None: The device corresponding to the mount point if found,
                otherwise None.
        """
        self.update_mounts()
        self.logger.debug("Finding device corresponding to mount point %s", mountpoint)
        if mountpoint in self.mounts:
            return self.mounts[mountpoint]['device']
        else:
            self.logger.warning("Failed to find mountpoint %s", mountpoint)
            return None
    def is_mounted(self, device=None, mountpoint=None):
        """
        Check if a device or mountpoint is currently mounted.

        Either checking by device or mountpoint is valid.

        Args:
            device (str, optional): The device to check if it is mounted.
                Defaults to None.
            mountpoint (str, optional): The mountpoint to check if it is mounted.
                Defaults to None.

        Returns:
            bool: True if the device is mounted or the mountpoint is in the list
                of mounts, False otherwise.
        """
        self.update_mounts()
        if device:
            return self.find_mountpoint(device) is not None
        else:
            return mountpoint in self.mounts
    def unmount(self, device=None, mountpoint=None):
        """
        Unmounts a filesystem.

        This method unmounts a filesystem either by the device name or the mountpoint.
        If a device is provided, it finds the corresponding mountpoint and unmounts it.
        If a mountpoint is provided directly, it unmounts the filesystem at that mountpoint.

        Args:
            device (str, optional): The device name to unmount. Defaults to None.
            mountpoint (str, optional): The mountpoint to unmount. Defaults to None.

        Raises:
            subprocess.CalledProcessError: If the unmount command fails.

        Logs:
            Debug information about the unmounting process.
        """
        if device:
            self.logger.debug("Finding mountpoint of %s", device)
            mountpoint = self.find_mountpoint(device)

        if mountpoint is not None:
            self.logger.debug(f"Unmounting {mountpoint}")

            done = False
            start_time = time.time()
            timeout = 60

            while not done and (time.time() - start_time) < timeout:
                ret = subprocess.run(["/usr/bin/umount", mountpoint], check=False, capture_output=True, encoding='utf-8')
                if ret.returncode == 0:
                    done = True
                else:
                    if "target is busy" in ret.stderr:
                        self.logger.debug("Filesystem busy, waiting. %.1f seconds left", timeout - (time.time() - start_time))
                        time.sleep(0.1)
                    else:
                        raise subprocess.CalledProcessError(ret.returncode, ret.args, output=ret.stdout, stderr=ret.stderr)

            # We've unmounted a filesystem, update our filesystems list
            self.update_mounts()
        else:
            self.logger.debug(f"{device} is not mounted")
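The busy-wait in `unmount` follows a generic retry-until-deadline shape; below is a minimal standalone sketch of that pattern (the stub operation and timings are illustrative only, not part of the original code):

```python
import time

def retry_until(operation, timeout=1.0, delay=0.01):
    """Retry operation() until it returns True or the deadline passes."""
    start = time.time()
    while (time.time() - start) < timeout:
        if operation():
            return True
        time.sleep(delay)
    return False

# Stub that only succeeds on the third attempt, standing in for a
# transient "target is busy" umount failure.
attempts = {"n": 0}

def succeed_on_third_try():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(retry_until(succeed_on_third_try))  # True
```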
    def mount(self, device, mountpoint, filesystem=None):
        """
        Mounts a device to a specified mountpoint.

        Parameters:
            device (str): The device to be mounted (e.g., '/dev/sda1').
            mountpoint (str): The directory where the device will be mounted.
            filesystem (str, optional): The type of filesystem to be used (e.g., 'ext4', 'ntfs'). Defaults to None.

        Raises:
            subprocess.CalledProcessError: If the mount command fails.

        Logs:
            Debug information about the mounting process, including the mount command, return code, stdout, and stderr.

        Side Effects:
            Creates the mountpoint directory if it does not exist.
            Updates the internal list of mounted filesystems.
        """
        self.logger.debug(f"Mounting {device} at {mountpoint}")

        if not os.path.exists(mountpoint):
            self.logger.debug(f"Creating directory {mountpoint}")
            os.mkdir(mountpoint)

        mount_cmd = ["/usr/bin/mount"]

        if filesystem is not None:
            mount_cmd = mount_cmd + ["-t", filesystem]

        mount_cmd = mount_cmd + [device, mountpoint]

        self.logger.debug(f"Mount command: {mount_cmd}")
        result = subprocess.run(mount_cmd, check=True, capture_output=True)

        self.logger.debug(f"return code: {result.returncode}")
        self.logger.debug(f"stdout: {result.stdout}")
        self.logger.debug(f"stderr: {result.stderr}")

        # We've mounted a new filesystem, update our filesystems list
        self.update_mounts()
    def ensure_mounted(self, device):
        """
        Ensure that the given device is mounted.

        This method attempts to mount the specified device to a path derived from
        the base mount path and the device's basename. If the device is of type NTFS,
        it uses the NTFSLibrary to handle the mounting process. For other filesystem
        types, it uses a generic mount method.

        Args:
            device (str): The path to the device that needs to be mounted.

        Returns:
            str: The path where the device is mounted.

        Logs:
            - Info: When starting the mounting process.
            - Debug: Various debug information including the mount path, filesystem type,
              and success message.

        Raises:
            OSError: If there is an error creating the mount directory or mounting the device.
        """
        self.logger.info("Mounting %s", device)

        self.unmount(device=device)
        path = os.path.join(self.base_mount_path, os.path.basename(device))

        self.logger.debug(f"Will mount repo at {path}")
        if not os.path.exists(path):
            os.mkdir(path)

        if self.filesystem_type(device) == "ntfs":
            self.logger.debug("Handling an NTFS filesystem")

            self._modprobe("ntfs3")
            self.ntfsfix(device)

            ntfs = NTFSLibrary(self.ntfs_implementation)
            ntfs.mount_filesystem(device, path)
            self.update_mounts()

        else:
            self.logger.debug("Handling a non-NTFS filesystem")
            self.mount(device, path)

        self.logger.debug("Successfully mounted at %s", path)
        return path
    def filesystem_type(self, device=None, mountpoint=None):
        """
        Determine the filesystem type of a given device or mountpoint.

        Args:
            device (str, optional): The device to probe. If not provided, the device
                will be determined based on the mountpoint.
            mountpoint (str, optional): The mountpoint to find the device for. This
                is used only if the device is not provided.

        Returns:
            str: The filesystem type of the device.

        Raises:
            KeyError: If the filesystem type cannot be determined from the probe.

        Logs:
            Debug: Logs the process of finding the device, probing the device, and
            the determined filesystem type.
        """
        if device is None:
            self.logger.debug("Finding device for mountpoint %s", mountpoint)
            device = self.find_device(mountpoint)

        self.logger.debug(f"Probing {device}")

        pr = blkid.Probe()
        pr.set_device(device)
        pr.enable_superblocks(True)
        pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID | blkid.SUBLKS_UUIDRAW | blkid.SUBLKS_LABELRAW)
        pr.do_safeprobe()

        fstype = pr["TYPE"].decode('utf-8')
        self.logger.debug(f"FS type is {fstype}")

        return fstype
    def is_filesystem(self, path):
        """
        Check if the given path is a filesystem root.

        Args:
            path (str): The path to check.

        Returns:
            bool: True if the path is a filesystem root, False otherwise.
        """
        # This is just an alias for better code readability
        return self.is_mounted(mountpoint=path)
    def create_filesystem(self, fs_type=None, fs_uuid=None, device=None):
        """
        Create a filesystem on the specified device.

        Parameters:
            fs_type (str): The type of filesystem to create (e.g., 'ntfs', 'ext4', 'xfs', 'btrfs').
            fs_uuid (str): The UUID to assign to the filesystem.
            device (str): The device on which to create the filesystem (e.g., '/dev/sda1').

        Raises:
            RuntimeError: If the filesystem type is not recognized or if the filesystem creation command fails.
        """
        self.logger.info(f"Creating filesystem {fs_type} with UUID {fs_uuid} in {device}")

        if fs_type == "ntfs" or fs_type == "ntfs3":
            self.logger.debug("Creating NTFS filesystem")
            ntfs = NTFSLibrary(self.ntfs_implementation)
            ntfs.create_filesystem(device, "NTFS")
            ntfs.modify_uuid(device, fs_uuid)

        else:
            command = [f"/usr/sbin/mkfs.{fs_type}"]
            command_args = []

            if fs_type == "ext4" or fs_type == "ext3":
                command_args = ["-U", fs_uuid, "-F", device]
            elif fs_type == "xfs":
                command_args = ["-m", f"uuid={fs_uuid}", "-f", device]
            elif fs_type == "btrfs":
                command_args = ["-U", fs_uuid, "-f", device]
            else:
                raise RuntimeError(f"Don't know how to create filesystem of type {fs_type}")

            command = command + command_args

            self.logger.debug(f"Creating Linux filesystem of type {fs_type} on {device}, command {command}")
            result = subprocess.run(command, check=True, capture_output=True)

            self.logger.debug(f"return code: {result.returncode}")
            self.logger.debug(f"stdout: {result.stdout}")
            self.logger.debug(f"stderr: {result.stderr}")
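The per-filesystem argument selection in `create_filesystem` can be exercised without touching a real device; this sketch repeats the same command construction (device paths and UUIDs are examples):

```python
# Standalone sketch of the mkfs command selection used by
# create_filesystem(); covers the same filesystem types.
def mkfs_command(fs_type, fs_uuid, device):
    command = [f"/usr/sbin/mkfs.{fs_type}"]
    if fs_type in ("ext4", "ext3"):
        args = ["-U", fs_uuid, "-F", device]
    elif fs_type == "xfs":
        args = ["-m", f"uuid={fs_uuid}", "-f", device]
    elif fs_type == "btrfs":
        args = ["-U", fs_uuid, "-f", device]
    else:
        raise RuntimeError(f"Don't know how to create filesystem of type {fs_type}")
    return command + args

print(mkfs_command("xfs", "1234-5678", "/dev/sda2"))
# ['/usr/sbin/mkfs.xfs', '-m', 'uuid=1234-5678', '-f', '/dev/sda2']
```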
    def mklostandfound(self, path):
        """
        Recreate the lost+found directory if necessary.

        When cloning at the root of a filesystem, cleaning the contents
        removes the lost+found directory. This is a special directory that requires the use of
        a tool to recreate it.

        It may fail if the filesystem does not need it. We consider this harmless and ignore it.

        The command is entirely skipped on NTFS, as mklost+found may malfunction if run on it,
        and has no useful purpose.
        """
        if self.is_filesystem(path):
            if self.filesystem_type(mountpoint=path) == "ntfs":
                self.logger.debug("Not running mklost+found on NTFS")
                return

            curdir = os.getcwd()
            result = None

            try:
                self.logger.debug(f"Re-creating lost+found in {path}")
                os.chdir(path)
                result = subprocess.run(["/usr/sbin/mklost+found"], check=True, capture_output=True)
            except subprocess.SubprocessError as e:
                self.logger.warning(f"Error running mklost+found: {e}")
            finally:
                # Always restore the working directory, even on failure
                os.chdir(curdir)

            if result:
                self.logger.debug(f"return code: {result.returncode}")
                self.logger.debug(f"stdout: {result.stdout}")
                self.logger.debug(f"stderr: {result.stderr}")
    def ntfsfix(self, device):
        """
        Run the ntfsfix command on the specified device.

        This method uses the ntfsfix utility to fix common NTFS problems on the given device.

        This allows mounting an unclean NTFS filesystem.

        Args:
            device (str): The path to the device to be fixed.

        Raises:
            subprocess.CalledProcessError: If the ntfsfix command fails.
        """
        self.logger.debug(f"Running ntfsfix on {device}")
        subprocess.run(["/usr/bin/ntfsfix", "-d", device], check=True)
    def unload_ntfs(self):
        """
        Unloads the NTFS filesystem module.

        This is a function added as a result of NTFS kernel module troubleshooting,
        to try to ensure that NTFS code is only active as long as necessary.

        The module is loaded internally as needed, so there is no load_ntfs function.

        It may be removed in the future.

        Raises:
            RuntimeError: If the module cannot be removed.
        """
        self._rmmod("ntfs3")
    def find_boot_device(self):
        """
        Searches for the EFI boot partition on the system.

        This method scans the system's partitions to locate the EFI boot partition,
        which is identified by the GUID "C12A7328-F81F-11D2-BA4B-00A0C93EC93B".

        Returns:
            str: The device node of the EFI partition if found, otherwise None.

        Logs:
            - Debug messages indicating the progress of the search.
            - A warning message if the EFI partition is not found.
        """
        disks = []

        self.logger.debug("Looking for EFI partition")
        with open("/proc/partitions", "r", encoding='utf-8') as partitions_file:
            line_num = 0
            for line in partitions_file:
                # Skip the two header lines of /proc/partitions
                if line_num >= 2:
                    data = line.split()
                    disk = data[3]
                    disks.append(disk)
                    self.logger.debug(f"Disk: {disk}")

                line_num = line_num + 1

        for disk in disks:
            self.logger.debug("Loading partitions for disk %s", disk)
            sfdisk_out = subprocess.run(["/usr/sbin/sfdisk", "-J", f"/dev/{disk}"], check=False, capture_output=True)

            if sfdisk_out.returncode == 0:
                disk_json_data = sfdisk_out.stdout
                disk_data = json.loads(disk_json_data)

                for part in disk_data["partitiontable"]["partitions"]:
                    self.logger.debug("Checking partition %s", part)
                    if part["type"] == "C12A7328-F81F-11D2-BA4B-00A0C93EC93B":
                        self.logger.debug("EFI partition found at %s", part["node"])
                        return part["node"]
            else:
                self.logger.debug("sfdisk returned with code %i, error %s", sfdisk_out.returncode, sfdisk_out.stderr)

        self.logger.warning("Failed to find EFI partition!")
        return None
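The partition walk in `find_boot_device` can be exercised without `sfdisk` by feeding it the same JSON shape that `sfdisk -J` emits; the GUID is the standard EFI System Partition type, and the sample layout below is made up:

```python
import json

EFI_GUID = "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"

def find_efi_node(sfdisk_json):
    # Same walk as find_boot_device(): return the first partition whose
    # GPT type GUID marks an EFI System Partition.
    data = json.loads(sfdisk_json)
    for part in data["partitiontable"]["partitions"]:
        if part["type"] == EFI_GUID:
            return part["node"]
    return None

sample = json.dumps({"partitiontable": {"partitions": [
    {"node": "/dev/sda1", "type": EFI_GUID},
    {"node": "/dev/sda2", "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4"},
]}})
print(find_efi_node(sample))  # /dev/sda1
```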
    def temp_unmount(self, mountpoint):
        """
        Temporarily unmounts the filesystem at the given mountpoint.

        This method finds the device associated with the specified mountpoint,
        and returns the information needed to remount it with temp_remount.

        The purpose of this function is to temporarily unmount a filesystem for
        actions like fsck, and to mount it back afterwards.

        Args:
            mountpoint (str): The mountpoint of the filesystem to unmount.

        Returns:
            dict: A dictionary containing the information needed to remount the filesystem.
        """
        device = self.find_device(mountpoint)
        fs = self.filesystem_type(mountpoint=mountpoint)

        data = {"mountpoint": mountpoint, "device": device, "filesystem": fs}

        self.logger.debug("Temporarily unmounting device %s, mounted on %s, fs type %s", device, mountpoint, fs)

        self.unmount(mountpoint=mountpoint)
        return data

    def temp_remount(self, unmount_data):
        """
        Remounts a filesystem unmounted with temp_unmount.

        This method remounts a filesystem using the data provided by temp_unmount.

        Args:
            unmount_data (dict): A dictionary containing the data needed to remount the filesystem.

        Returns:
            None
        """
        self.logger.debug("Remounting temporarily unmounted device %s on %s, fs type %s", unmount_data["device"], unmount_data["mountpoint"], unmount_data["filesystem"])
        self.mount(device=unmount_data["device"], mountpoint=unmount_data["mountpoint"], filesystem=unmount_data["filesystem"])
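The contract between `temp_unmount` and `temp_remount` is that the dict returned by the first call carries everything the second call needs. A toy stand-in (no real mounts are touched; names and values are illustrative):

```python
# Toy sketch of the temp_unmount/temp_remount round trip: the dict is
# the only state passed between the two calls.
def fake_temp_unmount(mountpoint, device, fs):
    return {"mountpoint": mountpoint, "device": device, "filesystem": fs}

def fake_temp_remount(unmount_data):
    # Stands in for the mount() call performed by temp_remount().
    return "mount -t {filesystem} {device} {mountpoint}".format(**unmount_data)

data = fake_temp_unmount("/mnt/sda1", "/dev/sda1", "ext4")
print(fake_temp_remount(data))  # mount -t ext4 /dev/sda1 /mnt/sda1
```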
gitlib/gitlib.py
@ -0,0 +1,22 @@
def parse_kernel_cmdline():
    """Parse the kernel arguments to obtain configuration parameters in Oglive

    OpenGnsys passes data in the kernel arguments, for example:
    [...] group=Aula_virtual ogrepo=192.168.2.1 oglive=192.168.2.1 [...]

    Returns:
        dict: Dict of configuration parameters and their values.
    """
    params = {}

    with open("/proc/cmdline", encoding='utf-8') as cmdline:
        line = cmdline.readline()
        parts = line.split()
        for part in parts:
            if "=" in part:
                # Split on the first "=" only, so values may contain "="
                key, value = part.split("=", 1)
                params[key] = value

    return params
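The same split logic can be demonstrated on a sample cmdline string instead of `/proc/cmdline` (the sample values are made up, modeled on the example in the docstring):

```python
# Standalone version of the parsing done by parse_kernel_cmdline(),
# applied to an in-memory string rather than /proc/cmdline.
def parse_cmdline_string(line):
    params = {}
    for part in line.split():
        if "=" in part:
            key, value = part.split("=", 1)
            params[key] = value
    return params

sample = "ro quiet group=Aula_virtual ogrepo=192.168.2.1 oglive=192.168.2.1"
print(parse_cmdline_string(sample)["ogrepo"])  # 192.168.2.1
```

Note that bare flags such as `ro` and `quiet` carry no `=` and are simply skipped.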
@ -0,0 +1,111 @@
import logging
import subprocess

from enum import Enum


class NTFSImplementation(Enum):
    KERNEL = 1
    NTFS3G = 2


class NTFSLibrary:
    """
    A library for managing NTFS filesystems.

    Attributes:
        logger (logging.Logger): Logger for the class.
        implementation (NTFSImplementation): The implementation to use for mounting NTFS filesystems.
    """

    def __init__(self, implementation):
        """
        Initializes the instance with the given implementation.

        Args:
            implementation: The implementation to be used by the instance.

        Attributes:
            logger (logging.Logger): Logger instance for the class, set to debug level.
            implementation: The implementation provided during initialization.
        """
        self.logger = logging.getLogger("NTFSLibrary")
        self.logger.setLevel(logging.DEBUG)
        self.implementation = implementation

        self.logger.debug("Initializing")

    def create_filesystem(self, device, label):
        """
        Creates an NTFS filesystem on the specified device with the given label.

        Args:
            device (str): The device path where the NTFS filesystem will be created.
            label (str): The label to assign to the NTFS filesystem.

        Returns:
            None

        Logs:
            Logs the creation process with the device and label information.
        """
        self.logger.info(f"Creating NTFS in {device} with label {label}")

        subprocess.run(["/usr/sbin/mkntfs", device, "-Q", "-L", label], check=True)

    def mount_filesystem(self, device, mountpoint):
        """
        Mounts a filesystem on the specified mountpoint using the specified NTFS implementation.

        Args:
            device (str): The device path to be mounted (e.g., '/dev/sda1').
            mountpoint (str): The directory where the device will be mounted.

        Raises:
            ValueError: If the NTFS implementation is unknown.
        """
        self.logger.info(f"Mounting {device} in {mountpoint} using implementation {self.implementation}")
        if self.implementation == NTFSImplementation.KERNEL:
            subprocess.run(["/usr/bin/mount", "-t", "ntfs3", device, mountpoint], check=True)
        elif self.implementation == NTFSImplementation.NTFS3G:
            subprocess.run(["/usr/bin/ntfs-3g", device, mountpoint], check=True)
        else:
            raise ValueError(f"Unknown NTFS implementation: {self.implementation}")

    def modify_uuid(self, device, uuid):
        """
        Modify the UUID of an NTFS device.

        This function changes the UUID of the specified NTFS device to the given UUID.
        It reads the current UUID from the device, logs the change, and writes the new UUID.

        Args:
            device (str): The path to the NTFS device file.
            uuid (str): The new UUID to be set, in hexadecimal string format.

        Raises:
            IOError: If there is an error opening or writing to the device file.
        """
        ntfs_uuid_offset = 0x48
        ntfs_uuid_length = 8

        binary_uuid = bytearray.fromhex(uuid)
        binary_uuid.reverse()

        self.logger.info(f"Changing UUID on {device} to {uuid}")
        with open(device, 'r+b') as ntfs_dev:
            self.logger.debug("Reading %i bytes from offset %i", ntfs_uuid_length, ntfs_uuid_offset)

            ntfs_dev.seek(ntfs_uuid_offset)
            prev_uuid = bytearray(ntfs_dev.read(ntfs_uuid_length))
            prev_uuid.reverse()
            prev_uuid_hex = bytearray.hex(prev_uuid)
            self.logger.debug(f"Previous UUID: {prev_uuid_hex}")

            self.logger.debug("Writing...")
            ntfs_dev.seek(ntfs_uuid_offset)
            ntfs_dev.write(binary_uuid)
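`modify_uuid` works because the NTFS volume serial is stored little-endian on disk at offset 0x48, so the hex string is byte-reversed before writing and reversed again on read-back. The round trip can be shown without a device (the serial value is an example):

```python
# The byte-order round trip performed by modify_uuid(): display order
# -> on-disk (little-endian) order -> back to display order.
uuid = "1234567890abcdef"  # example 8-byte serial as hex

binary_uuid = bytearray.fromhex(uuid)
binary_uuid.reverse()            # on-disk order
print(binary_uuid.hex())         # efcdab9078563412

read_back = bytearray(binary_uuid)
read_back.reverse()              # back to display order
print(read_back.hex())           # 1234567890abcdef
```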
@ -1,9 +1,11 @@
gitdb==4.0.11
GitPython==3.1.43
libarchive==0.4.7
libarchive-c==5.1
nose==1.3.7
pathlib==1.0.1
pkg_resources==0.0.0
pylibacl==0.7.0
pylibblkid==0.3
pyxattr==0.8.1
smmap==5.0.1
tqdm==4.66.5
@ -0,0 +1,57 @@
# Installing Dependencies for Python

Converting the code to Python 3 currently requires the packages specified in `requirements.txt`.

To install Python dependencies, the `venv` module (https://docs.python.org/3/library/venv.html) is used, which installs all dependencies in an isolated environment separate from the system.

# Quick Installation

## Ubuntu 24.04

    sudo apt install python3-git opengnsys-libarchive-c python3-termcolor bsdextrautils

## Add SSH Keys to oglive

The Git system accesses the ogrepository via SSH. To work, it needs the oglive to have an SSH key, and the ogrepository must accept it.

The Git installer can make the required changes with:

    ./opengnsys_git_installer.py --set-ssh-key

Or, for a specific oglive:

    ./opengnsys_git_installer.py --set-ssh-key --oglive 1 # oglive number

Running this command automatically adds the SSH key to Forgejo.
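For reference, the kind of call the automatic step presumably makes is a `POST /api/v1/user/keys` against the Forgejo REST API. This is a hedged sketch, not the installer's actual code: the endpoint is the standard Forgejo/Gitea one, and the credentials, port, and key text below are the installer defaults and placeholders, not requirements. It only builds the request object; sending it needs a running Forgejo.

```python
import base64
import json
import urllib.request

def build_add_key_request(pubkey, title="oglive",
                          url="http://localhost:3000/api/v1/user/keys"):
    # Basic-auth POST with a JSON body, as expected by the Forgejo API.
    payload = json.dumps({"title": title, "key": pubkey}).encode()
    credentials = base64.b64encode(b"oggit:opengnsys").decode()
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {credentials}",
        },
    )

req = build_add_key_request("ssh-ed25519 AAAA... oglive")  # placeholder key
print(req.method, req.full_url)
# Sending would be: urllib.request.urlopen(req)
```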
The existing key can be extracted with:

    ./opengnsys_git_installer.py --extract-ssh-key --quiet

# Running the Installer

    # ./opengnsys_git_installer.py

It must be run as `root`.

The installer downloads and installs Forgejo, a web interface for Git. The configuration is generated automatically.

Forgejo manages the repositories and SSH access, so it must always be running. By default, it is installed on port 3000.

The default user is `oggit` with the password `opengnsys`.

# Packages with Dependencies

The OgGit system requires Python modules that are not included in Ubuntu 24.04 or have outdated versions.

The package sources can be found in oggit/packages.

# Source Code Documentation

Python documentation can be generated using a utility like pdoc3 (there are multiple possible alternatives):

    # Install pdoc3
    pip install --user pdoc3

    # Generate documentation
    pdoc3 --force --html opengnsys_git_installer.py
@ -7,51 +7,51 @@ Para instalar dependencias de python se usa el modulo venv (https://docs.python.

# Quick Installation

## Ubuntu 24.04

    sudo apt install python3-git opengnsys-libarchive-c python3-termcolor bsdextrautils

## Add SSH Keys to oglive

The Git system accesses the ogrepository via SSH. To work, it needs the oglive to have an SSH key, and the ogrepository must accept it.

The Git installer can make the required changes with:

    ./opengnsys_git_installer.py --set-ssh-key

Or, for a specific oglive:

    ./opengnsys_git_installer.py --set-ssh-key --oglive 1 # oglive number

Running this command adds the SSH key to Forgejo automatically.

The existing key can be extracted with:

    ./opengnsys_git_installer.py --extract-ssh-key --quiet

# Running

    # ./opengnsys_git_installer.py

It must be run as `root`.

The installer downloads and installs Forgejo, a web interface for Git. The configuration is generated automatically.

Forgejo manages the repositories and SSH access, so it must always remain running. By default it is installed on port 3000.

The default user is `oggit` with the password `opengnsys`.

# Packages with Dependencies

The OgGit system requires Python modules that are not included in Ubuntu 24.04 or have versions that are too old.

The package sources can be found in oggit/packages.

# Source Code Documentation

Python documentation can be generated with a utility like pdoc3 (there are multiple possible alternatives):
@ -0,0 +1,78 @@
APP_NAME = OpenGnsys Git
APP_SLOGAN =
RUN_USER = {forgejo_user}
WORK_PATH = {forgejo_work_path}
RUN_MODE = prod

[database]
DB_TYPE = sqlite3
HOST = 127.0.0.1:3306
NAME = forgejo
USER = forgejo
PASSWD =
SCHEMA =
SSL_MODE = disable
PATH = {forgejo_db_path}
LOG_SQL = false

[repository]
ROOT = {forgejo_repository_root}

[server]
SSH_DOMAIN = og-admin
DOMAIN = og-admin
HTTP_PORT = {forgejo_port}
ROOT_URL = http://{forgejo_hostname}:{forgejo_port}/
APP_DATA_PATH = {forgejo_data_path}
DISABLE_SSH = false
SSH_PORT = 22
LFS_START_SERVER = true
LFS_JWT_SECRET = {forgejo_lfs_jwt_secret}
OFFLINE_MODE = true

[lfs]
PATH = {forgejo_lfs_path}

[mailer]
ENABLED = false

[service]
REGISTER_EMAIL_CONFIRM = false
ENABLE_NOTIFY_MAIL = false
DISABLE_REGISTRATION = true
ALLOW_ONLY_EXTERNAL_REGISTRATION = false
ENABLE_CAPTCHA = false
REQUIRE_SIGNIN_VIEW = false
DEFAULT_KEEP_EMAIL_PRIVATE = false
DEFAULT_ALLOW_CREATE_ORGANIZATION = true
DEFAULT_ENABLE_TIMETRACKING = true
NO_REPLY_ADDRESS = noreply.localhost

[openid]
ENABLE_OPENID_SIGNIN = true
ENABLE_OPENID_SIGNUP = true

[cron.update_checker]
ENABLED = true

[session]
PROVIDER = file

[log]
MODE = console
LEVEL = info
ROOT_PATH = {forgejo_log_path} #/tmp/log

[repository.pull-request]
DEFAULT_MERGE_STYLE = merge

[repository.signing]
DEFAULT_TRUST_MODEL = committer

[security]
INSTALL_LOCK = true
INTERNAL_TOKEN = {forgejo_internal_token}
PASSWORD_HASH_ALGO = pbkdf2_hi

[oauth2]
JWT_SECRET = {forgejo_jwt_secret}
@ -0,0 +1,11 @@
[Service]
RestartSec=10s
Type=simple
User={forgejo_user}
Group={forgejo_group}
WorkingDirectory={forgejo_work_path}
ExecStart={forgejo_bin} web --config {forgejo_app_ini}
Restart=always

[Install]
WantedBy=multi-user.target
@ -2,6 +2,10 @@
"""Script for installing the Git repository"""

import os
import sys
sys.path.insert(0, "/usr/share/opengnsys-modules/python3/dist-packages")


import shutil
import argparse
import tempfile
@ -10,9 +14,26 @@ import subprocess
import sys
import pwd
import grp
from termcolor import cprint
import git
import libarchive
from libarchive.extract import *

#from libarchive.entry import FileType
import urllib.request
import pathlib
import socket
import time
import requests
import tempfile
import hashlib
import datetime

#FORGEJO_VERSION="8.0.3"
FORGEJO_VERSION="9.0.0"
FORGEJO_URL=f"https://codeberg.org/forgejo/forgejo/releases/download/v{FORGEJO_VERSION}/forgejo-{FORGEJO_VERSION}-linux-amd64"


def show_error(*args):
@ -27,6 +48,7 @@ def show_error(*args):
    """
    cprint(*args, "red", attrs=["bold"], file=sys.stderr)


class RequirementException(Exception):
    """Exception indicating that some requirement is missing
@ -60,6 +82,7 @@ class Oglive:

    def __init__(self):
        self.__logger = logging.getLogger("Oglive")

        self.binary = "/opt/opengnsys/bin/oglivecli"
        self.__logger.debug("Initializing")
@ -100,16 +123,25 @@ class OpengnsysGitInstaller:
        self.testmode = False
        self.base_path = "/opt/opengnsys"
        self.git_basedir = "base.git"
        self.ssh_user = "opengnsys"
        self.ssh_group = "opengnsys"
        self.email = "OpenGnsys@opengnsys.com"

        self.forgejo_user = "oggit"
        self.forgejo_password = "opengnsys"
        self.forgejo_organization = "opengnsys"
        self.forgejo_port = 3000

        self.set_ssh_user_group("oggit", "oggit")

        self.ssh_homedir = pwd.getpwnam(self.ssh_user).pw_dir
        self.ssh_uid = pwd.getpwnam(self.ssh_user).pw_uid
        self.ssh_gid = grp.getgrnam(self.ssh_group).gr_gid
        self.temp_dir = None
        self.script_path = os.path.realpath(os.path.dirname(__file__))

        # Possible names for SSH public keys
        self.ssh_key_users = ["root", "opengnsys"]
        self.key_names = ["id_rsa.pub", "id_ed25519.pub", "id_ecdsa.pub", "id_ed25519_sk.pub", "id_ecdsa_sk.pub"]

        # Possible names for SSH key in oglive
        self.key_paths = ["scripts/ssl/id_rsa.pub", "scripts/ssl/id_ed25519.pub", "scripts/ssl/id_ecdsa.pub", "scripts/ssl/id_ed25519_sk.pub", "scripts/ssl/id_ecdsa_sk.pub"]

        self.key_paths_dict = {}

        for kp in self.key_paths:
@ -157,7 +189,33 @@ class OpengnsysGitInstaller:
|
|||
        if self.temp_dir:
            shutil.rmtree(self.temp_dir, ignore_errors=True)

    def _init_git_repo(self, reponame):
    def set_ssh_user_group(self, username, groupname):

        self.ssh_group = groupname
        self.ssh_user = username

        try:
            self.ssh_gid = grp.getgrnam(self.ssh_group).gr_gid
            self.__logger.info("Group %s exists with gid %i", self.ssh_group, self.ssh_gid)
        except KeyError:
            self.__logger.info("Need to create group %s", self.ssh_group)
            subprocess.run(["/usr/sbin/groupadd", "--system", self.ssh_group], check=True)
            self.ssh_gid = grp.getgrnam(groupname).gr_gid

        try:
            self.ssh_uid = pwd.getpwnam(self.ssh_user).pw_uid
            self.__logger.info("User %s exists with uid %i", self.ssh_user, self.ssh_uid)
        except KeyError:
            self.__logger.info("Need to create user %s", self.ssh_user)
            subprocess.run(["/usr/sbin/useradd", "--gid", str(self.ssh_gid), "-m", "--system", self.ssh_user], check=True)
            self.ssh_uid = pwd.getpwnam(username).pw_uid

        self.ssh_homedir = pwd.getpwnam(username).pw_dir

    def init_git_repo(self, reponame):
        """Initialize a Git repository"""
        # Create the repository
        ogdir_images = os.path.join(self.base_path, "images")

@@ -180,7 +238,7 @@ class OpengnsysGitInstaller:
        self.__logger.info("Configuring Git repository")
        repo.config_writer().set_value("user", "name", "OpenGnsys").release()
        repo.config_writer().set_value("user", "email", "OpenGnsys@opengnsys.com").release()
        repo.config_writer().set_value("user", "email", self.email).release()

        self._recursive_chown(repo_path, ouid=self.ssh_uid, ogid=self.ssh_gid)

@@ -209,6 +267,272 @@ class OpengnsysGitInstaller:
            for filename in filenames:
                os.chown(os.path.join(dirpath, filename), uid=ouid, gid=ogid)

    def _wait_for_port(self, host, port):
        self.__logger.info("Waiting for %s:%i to be up", host, port)

        timeout = 60
        start_time = time.time()

        ready = False
        while not ready and (time.time() - start_time) < timeout:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.connect((host, port))
                ready = True
                s.close()
            except TimeoutError:
                self.__logger.debug("Timed out, no connection yet.")
            except OSError as oserr:
                self.__logger.debug("%s, no connection yet. %.1f seconds left.", oserr.strerror, timeout - (time.time() - start_time))

            time.sleep(0.1)

        if ready:
            self.__logger.info("Connection established.")
        else:
            self.__logger.error("Timed out waiting for connection!")
            raise TimeoutError("Timed out waiting for connection!")
    def add_ssh_key_from_squashfs(self, oglive_num = None):

        if oglive_num is None:
            self.__logger.info("Using default oglive")
            oglive_num = int(self.oglive.get_default())
        else:
            self.__logger.info("Using oglive %i", oglive_num)

        oglive_client = self.oglive.get_clients()[str(oglive_num)]
        self.__logger.info("Oglive is %s", oglive_client)

        keys = self.extract_ssh_keys(oglive_num = oglive_num)
        for k in keys:
            timestamp = '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())
            self.add_forgejo_sshkey(k, f"Key for {oglive_client} ({timestamp})")
    def extract_ssh_keys(self, oglive_num = None):
        public_keys = []

        squashfs = "ogclient.sqfs"

        tftp_dir = os.path.join(self.base_path, "tftpboot")

        if oglive_num is None:
            self.__logger.info("Reading from default oglive")
            oglive_num = self.oglive.get_default()
        else:
            self.__logger.info("Reading from oglive %i", oglive_num)

        oglive_client = self.oglive.get_clients()[str(oglive_num)]
        self.__logger.info("Oglive is %s", oglive_client)

        client_squashfs_path = os.path.join(tftp_dir, oglive_client, squashfs)

        self.__logger.info("Mounting %s", client_squashfs_path)
        mount_tempdir = tempfile.TemporaryDirectory()
        ssh_keys_dir = os.path.join(mount_tempdir.name, "root", ".ssh")

        subprocess.run(["mount", client_squashfs_path, mount_tempdir.name], check=True)
        for file in os.listdir(ssh_keys_dir):
            full_path = os.path.join(ssh_keys_dir, file)

            if file.endswith(".pub"):
                self.__logger.info("Found public key: %s", full_path)

                with open(full_path, "r", encoding="utf-8") as keyfile:
                    keydata = keyfile.read().strip()
                    public_keys = public_keys + [keydata]

        subprocess.run(["umount", mount_tempdir.name], check=True)

        return public_keys
    def _extract_ssh_key_from_initrd(self):
        public_key=""

        INITRD = "oginitrd.img"

        tftp_dir = os.path.join(self.base_path, "tftpboot")
        default_num = self.oglive.get_default()
        default_client = self.oglive.get_clients()[default_num]
        client_initrd_path = os.path.join(tftp_dir, default_client, INITRD)

        #self.temp_dir = self._get_tempdir()

        if self.usesshkey:
            with open(self.usesshkey, 'r') as f:
                public_key = f.read().strip()

        else:
            if os.path.isfile(client_initrd_path):
                #os.makedirs(temp_dir, exist_ok=True)
                #os.chdir(self.temp_dir.name)
                self.__logger.debug("Uncompressing %s", client_initrd_path)
                public_key = None
                with libarchive.file_reader(client_initrd_path) as initrd:
                    for file in initrd:
                        self.__logger.debug("File: %s", file)

                        pathname = file.pathname
                        if pathname.startswith("./"):
                            pathname = pathname[2:]

                        if pathname in self.key_paths_dict:
                            data = bytearray()
                            for block in file.get_blocks():
                                data = data + block
                            public_key = data.decode('utf-8').strip()

                            break
            else:
                print(f"Initrd image {client_initrd_path} not found")
                exit(2)

        return public_key
    def set_ssh_key_in_initrd(self, client_num = None):
        INITRD = "oginitrd.img"

        tftp_dir = os.path.join(self.base_path, "tftpboot")

        if client_num is None:
            self.__logger.info("Will modify default client")
            client_num = self.oglive.get_default()

        ogclient = self.oglive.get_clients()[client_num]
        client_initrd_path = os.path.join(tftp_dir, ogclient, INITRD)
        client_initrd_path_new = client_initrd_path + ".new"

        self.__logger.debug("initrd path for ogclient %s is %s", ogclient, client_initrd_path)

        temp_dir = tempfile.TemporaryDirectory()
        temp_dir_path = temp_dir.name

        #temp_dir_path = "/tmp/extracted"
        if os.path.exists(temp_dir_path):
            shutil.rmtree(temp_dir_path)

        pathlib.Path(temp_dir_path).mkdir(parents=True, exist_ok = True)

        self.__logger.debug("Uncompressing initrd %s into %s", client_initrd_path, temp_dir_path)
        os.chdir(temp_dir_path)
        libarchive.extract_file(client_initrd_path, flags = EXTRACT_UNLINK | EXTRACT_OWNER | EXTRACT_PERM | EXTRACT_FFLAGS | EXTRACT_TIME)
        ssh_key_dir = os.path.join(temp_dir_path, "scripts", "ssl")

        client_key_path = os.path.join(ssh_key_dir, "id_ed25519")
        authorized_keys_path = os.path.join(ssh_key_dir, "authorized_keys")

        oglive_public_key = ""

        # Create an SSH key on the oglive, if needed
        pathlib.Path(ssh_key_dir).mkdir(parents=True, exist_ok=True)
        if os.path.exists(client_key_path):
            self.__logger.info("Creating SSH key not necessary, it is already in the initrd")
        else:
            self.__logger.info("Writing new SSH key into %s", client_key_path)
            subprocess.run(["/usr/bin/ssh-keygen", "-t", "ed25519", "-N", "", "-f", client_key_path], check=True)

        with open(client_key_path + ".pub", "r", encoding="utf-8") as pubkey:
            oglive_public_key = pubkey.read()

        # Add our public keys to the oglive, so that we can log in
        public_keys = ""

        for username in self.ssh_key_users:
            self.__logger.debug("Looking for keys of user %s", username)
            homedir = pwd.getpwnam(username).pw_dir

            for key in self.key_names:
                key_path = os.path.join(homedir, ".ssh", key)
                self.__logger.debug("Checking if we have %s...", key_path)
                if os.path.exists(key_path):
                    with open(key_path, "r", encoding='utf-8') as public_key_file:
                        self.__logger.info("Adding %s to authorized_keys", key_path)
                        public_key = public_key_file.read()
                        public_keys = public_keys + public_key + "\n"

        self.__logger.debug("Writing %s", authorized_keys_path)
        with open(authorized_keys_path, "w", encoding='utf-8') as auth_keys:
            auth_keys.write(public_keys)

        # Hardlinks in the source package are not correctly packaged back as hardlinks.
        # Taking the easy option of turning them into symlinks for now.
        file_hashes = {}
        with libarchive.file_writer(client_initrd_path_new, "cpio_newc", "zstd") as writer:

            file_list = []
            for root, subdirs, files in os.walk(temp_dir_path):
                proot = pathlib.PurePosixPath(root)
                relpath = proot.relative_to(temp_dir_path)

                for file in files:
                    abs_path = os.path.join(root, file)
                    full_path = os.path.join(relpath, file)

                    #self.__logger.debug("%s", abs_path)
                    digest = None

                    if os.path.islink(abs_path):
                        self.__logger.debug("%s is a symlink", abs_path)
                        continue

                    if not os.path.exists(abs_path):
                        self.__logger.debug("%s does not exist", abs_path)
                        continue

                    stat_data = os.stat(abs_path)
                    with open(full_path, "rb") as in_file:
                        digest = hashlib.file_digest(in_file, "sha256").hexdigest()

                    if stat_data.st_size > 0 and not os.path.islink(full_path):
                        if digest in file_hashes:
                            target_path = pathlib.Path(file_hashes[digest])
                            link_path = target_path.relative_to(relpath, walk_up=True)

                            self.__logger.debug("%s was a duplicate of %s, linking to %s", full_path, file_hashes[digest], link_path)

                            os.unlink(full_path)
                            #os.link(file_hashes[digest], full_path)
                            os.symlink(link_path, full_path)
                        else:
                            file_hashes[digest] = full_path

            writer.add_files(".", recursive=True)

        os.rename(client_initrd_path, client_initrd_path + ".old")

        if os.path.exists(client_initrd_path + ".sum"):
            os.rename(client_initrd_path + ".sum", client_initrd_path + ".sum.old")

        os.rename(client_initrd_path_new, client_initrd_path)

        with open(client_initrd_path, "rb") as initrd_file:
            hexdigest = hashlib.file_digest(initrd_file, "sha256").hexdigest()
        with open(client_initrd_path + ".sum", "w", encoding="utf-8") as digest_file:
            digest_file.write(hexdigest + "\n")

        self.__logger.info("Updated initrd %s", client_initrd_path)

        timestamp = '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())

        self.add_forgejo_sshkey(oglive_public_key, f"Key for {ogclient} ({timestamp})")

    def install(self):
        """Install

@@ -254,108 +578,273 @@ class OpengnsysGitInstaller:
        self.__logger.debug("Installing dependencies")
        subprocess.run(["apt-get", "install", "-y", "git"], check=True)

    def _install_template(self, template, destination, keysvalues):

        # Authenticate the opengnsys user with the public key from the ogLive clients
        # Requires all ogLive to share the same public key (use setsslkey)
        self.__logger.info("Writing template %s into %s", template, destination)

        # Take the public key of the default client
        default_num = self.oglive.get_default()
        default_client = self.oglive.get_clients()[default_num]
        data = ""
        with open(template, "r", encoding="utf-8") as template_file:
            data = template_file.read()

        for key in keysvalues.keys():
            data = data.replace("{" + key + "}", keysvalues[key])

        with open(destination, "w+", encoding="utf-8") as out_file:
            out_file.write(data)

    def _runcmd(self, cmd):
        self.__logger.debug("Running: %s", cmd)

        ret = subprocess.run(cmd, check=True, capture_output=True, encoding='utf-8')
        return ret.stdout.strip()
    def install_forgejo(self):
        self.__logger.info("Installing Forgejo")

        client_initrd_path = os.path.join(tftp_dir, default_client, INITRD)
        self.__logger.debug("Initrd path: %s", client_initrd_path)
        # If we exit with an error, remove the temporary directory

        if not self.ignoresshkey:
            public_key=""
            if self.usesshkey:
                with open(self.usesshkey, 'r') as f:
                    public_key = f.read().strip()
        bin_path = os.path.join(self.base_path, "bin", "forgejo")
        conf_dir_path = os.path.join(self.base_path, "etc", "forgejo")

        lfs_dir_path = os.path.join(self.base_path, "images", "git-lfs")
        git_dir_path = os.path.join(self.base_path, "images", "git")

        forgejo_work_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/work")
        forgejo_db_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/db")
        forgejo_data_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/data")

        forgejo_db_path = os.path.join(forgejo_db_dir_path, "forgejo.db")

        forgejo_log_dir_path = os.path.join(self.base_path, "log", "forgejo")

        conf_path = os.path.join(conf_dir_path, "app.ini")

        self.__logger.debug("Stopping opengnsys-forgejo service")
        subprocess.run(["systemctl", "stop", "opengnsys-forgejo"], check=False)

        self.__logger.debug("Downloading from %s into %s", FORGEJO_URL, bin_path)
        urllib.request.urlretrieve(FORGEJO_URL, bin_path)
        os.chmod(bin_path, 0o755)

        if os.path.exists(forgejo_db_path):
            self.__logger.debug("Removing old configuration")
            os.unlink(forgejo_db_path)
        else:
            self.__logger.debug("Old configuration not present, ok.")

        self.__logger.debug("Wiping old data")
        for dir in [conf_dir_path, git_dir_path, lfs_dir_path, forgejo_work_dir_path, forgejo_data_dir_path, forgejo_db_dir_path]:
            if os.path.exists(dir):
                self.__logger.debug("Removing %s", dir)
                shutil.rmtree(dir)

        self.__logger.debug("Creating directories")

        pathlib.Path(conf_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(git_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(lfs_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(forgejo_work_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(forgejo_data_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(forgejo_db_dir_path).mkdir(parents=True, exist_ok=True)
        pathlib.Path(forgejo_log_dir_path).mkdir(parents=True, exist_ok=True)

        os.chown(lfs_dir_path, self.ssh_uid, self.ssh_gid)
        os.chown(git_dir_path, self.ssh_uid, self.ssh_gid)
        os.chown(forgejo_data_dir_path, self.ssh_uid, self.ssh_gid)
        os.chown(forgejo_work_dir_path, self.ssh_uid, self.ssh_gid)
        os.chown(forgejo_db_dir_path, self.ssh_uid, self.ssh_gid)
        os.chown(forgejo_log_dir_path, self.ssh_uid, self.ssh_gid)

        data = {
            "forgejo_user" : self.ssh_user,
            "forgejo_group" : self.ssh_group,
            "forgejo_port" : str(self.forgejo_port),
            "forgejo_bin" : bin_path,
            "forgejo_app_ini" : conf_path,
            "forgejo_work_path" : forgejo_work_dir_path,
            "forgejo_data_path" : forgejo_data_dir_path,
            "forgejo_db_path" : forgejo_db_path,
            "forgejo_repository_root" : git_dir_path,
            "forgejo_lfs_path" : lfs_dir_path,
            "forgejo_log_path" : forgejo_log_dir_path,
            "forgejo_hostname" : self._runcmd("hostname"),
            "forgejo_lfs_jwt_secret" : self._runcmd([bin_path, "generate", "secret", "LFS_JWT_SECRET"]),
            "forgejo_jwt_secret" : self._runcmd([bin_path, "generate", "secret", "JWT_SECRET"]),
            "forgejo_internal_token" : self._runcmd([bin_path, "generate", "secret", "INTERNAL_TOKEN"]),
            "forgejo_secret_key" : self._runcmd([bin_path, "generate", "secret", "SECRET_KEY"])
        }

        self._install_template(os.path.join(self.script_path, "forgejo-app.ini"), conf_path, data)
        self._install_template(os.path.join(self.script_path, "forgejo.service"), "/etc/systemd/system/opengnsys-forgejo.service", data)

        self.__logger.debug("Reloading systemd and starting service")
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "opengnsys-forgejo"], check=True)
        subprocess.run(["systemctl", "restart", "opengnsys-forgejo"], check=True)

        self.__logger.info("Waiting for forgejo to start")
        self._wait_for_port("localhost", self.forgejo_port)

        self.__logger.info("Configuring forgejo")

        def run_forge_cmd(args):
            cmd = [bin_path, "--config", conf_path] + args
            self.__logger.debug("Running command: %s", cmd)

            ret = subprocess.run(cmd, check=False, capture_output=True, encoding='utf-8', user=self.ssh_user)
            if ret.returncode == 0:
                return ret.stdout.strip()
            else:
                if os.path.isfile(client_initrd_path):
                    #os.makedirs(temp_dir, exist_ok=True)
                    os.chdir(self.temp_dir.name)
                    self.__logger.debug("Uncompressing %s", client_initrd_path)
                    public_key = None
                    with libarchive.file_reader(client_initrd_path) as initrd:
                        for file in initrd:
                            self.__logger.debug("File: %s", file)
                self.__logger.error("Failed to run command: %s, return code %i", cmd, ret.returncode)
                self.__logger.error("stdout: %s", ret.stdout)
                self.__logger.error("stderr: %s", ret.stderr)
                raise RuntimeError("Failed to run necessary command")

                            if file.pathname in self.key_paths_dict:
                                data = bytearray()
                                for block in file.get_blocks():
                                    data = data + block
                                public_key = data.decode('utf-8').strip()
        run_forge_cmd(["admin", "doctor", "check"])

                                break
                else:
                    print(f"Initrd image {client_initrd_path} not found")
                    exit(2)
        run_forge_cmd(["admin", "user", "create", "--username", self.forgejo_user, "--password", self.forgejo_password, "--email", self.email])

        # If the public key does not exist, exit with an error
        if not public_key:
            raise RequirementException(f"No public key found inside the ogLive at {self.temp_dir}, image {client_initrd_path}. Searched paths: {self.key_paths}\n" +
                "All oglive must share the same public key (use setsslkey)")
        token = run_forge_cmd(["admin", "user", "generate-access-token", "--username", self.forgejo_user, "-t", "gitapi", "--scopes", "all", "--raw"])

        with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "w+", encoding='utf-8') as token_file:
            token_file.write(token)

        ssh_dir = os.path.join(self.ssh_homedir, ".ssh")
        authorized_keys_file = os.path.join(ssh_dir, "authorized_keys")
        ssh_key = self._extract_ssh_key_from_initrd()

        self.__logger.debug("Configuring ssh: adding key %s to %s", public_key, authorized_keys_file)
        self.__logger.debug("Key: %s", public_key)
        os.makedirs(ssh_dir, exist_ok=True)
        self._add_line_to_file(authorized_keys_file, public_key)
        self.add_forgejo_sshkey(ssh_key, "Default key")

        os.chmod(authorized_keys_file, 0o600)
        os.chown(ssh_dir, uid=self.ssh_uid, gid=self.ssh_gid)
        os.chown(authorized_keys_file, uid=self.ssh_uid, gid=self.ssh_gid)

        # Configure the ssh service to allow public key authentication
        self.__logger.info("Configuring the ssh service to allow public key authentication.")
        with open("/etc/ssh/sshd_config", "r") as f:
            sshd_config = f.read()
        sshd_config = sshd_config.replace("PubkeyAuthentication no", "PubkeyAuthentication yes")
        with open("/etc/ssh/sshd_config", "w") as f:
            f.write(sshd_config)
        os.system("systemctl reload ssh")
    def add_forgejo_repo(self, repository_name, description = ""):
        token = ""
        with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "r", encoding='utf-8') as token_file:
            token = token_file.read().strip()

        # Install git
        os.system("apt install git")
        self.__logger.info("Adding repository %s for Forgejo", repository_name)

        # So that the user can only use git (not ssh)
        SHELL = shutil.which("git-shell")
        os.system(f"usermod -s {SHELL} opengnsys")
        r = requests.post(
            f"http://localhost:{self.forgejo_port}/api/v1/user/repos",
            json={
                "auto_init" : False,
                "default_branch" : "main",
                "description" : description,
                "name" : repository_name,
                "private" : False
            }, headers={
                'Authorization' : f"token {token}"
            },
            timeout = 60
        )

        # Create repositories
        self._init_git_repo('windows.git')
        self._init_git_repo('linux.git')
        self._init_git_repo('mac.git')
        self.__logger.info("Request status was %i, content %s", r.status_code, r.content)

        # Give permissions to the opengnsys user
        for DIR in ["base.git", "linux.git", "windows.git"]: #, "LinAcl", "WinAcl"]:
            self._recursive_chown(os.path.join(ogdir_images, DIR), ouid=self.ssh_uid, ogid=self.ssh_gid)
    def add_forgejo_sshkey(self, pubkey, description = ""):
        token = ""
        with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "r", encoding='utf-8') as token_file:
            token = token_file.read().strip()

        self.__logger.info("Adding SSH key to Forgejo: %s (%s)", pubkey, description)

        r = requests.post(
            f"http://localhost:{self.forgejo_port}/api/v1/user/keys",
            json={
                "key" : pubkey,
                "read_only" : False,
                "title" : description
            }, headers={
                'Authorization' : f"token {token}"
            },
            timeout = 60
        )

        self.__logger.info("Request status was %i, content %s", r.status_code, r.content)

    def add_forgejo_organization(self, pubkey, description = ""):
        token = ""
        with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "r", encoding='utf-8') as token_file:
            token = token_file.read().strip()

        self.__logger.info("Adding SSH key to Forgejo: %s", pubkey)

        r = requests.post(
            f"http://localhost:{self.forgejo_port}/api/v1/user/keys",
            json={
                "key" : pubkey,
                "read_only" : False,
                "title" : description
            }, headers={
                'Authorization' : f"token {token}"
            },
            timeout = 60
        )

        self.__logger.info("Request status was %i, content %s", r.status_code, r.content)
if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)20s - [%(levelname)5s] - %(message)s')
    logger = logging.getLogger(__name__)
    sys.stdout.reconfigure(encoding='utf-8')

    opengnsys_log_dir = "/opt/opengnsys/log"

    logger = logging.getLogger(__package__)
    logger.setLevel(logging.DEBUG)
    logger.info("Program start")

    streamLog = logging.StreamHandler()
    streamLog.setLevel(logging.INFO)

    if not os.path.exists(opengnsys_log_dir):
        os.mkdir(opengnsys_log_dir)

    logFilePath = f"{opengnsys_log_dir}/git_installer.log"
    fileLog = logging.FileHandler(logFilePath)
    fileLog.setLevel(logging.DEBUG)

    formatter = logging.Formatter('%(asctime)s - %(name)24s - [%(levelname)5s] - %(message)s')

    streamLog.setFormatter(formatter)
    fileLog.setFormatter(formatter)

    logger.addHandler(streamLog)
    logger.addHandler(fileLog)

    parser = argparse.ArgumentParser(
        prog="OpenGnsys Installer",
        description="Script to install the git repository",
    )
    parser.add_argument('--forgejo-only', action='store_true', help="Only install forgejo")
    parser.add_argument('--forgejo-addrepos', action='store_true', help="Only add forgejo repositories")

    parser.add_argument('--testmode', action='store_true', help="Test mode")
    parser.add_argument('--ignoresshkey', action='store_true', help="Ignore SSH key")
    parser.add_argument('--usesshkey', type=str, help="Use the specified SSH key")
    parser.add_argument('--test-createuser', action='store_true')
    parser.add_argument('--extract-ssh-key', action='store_true', help="Extract SSH key from oglive squashfs")
    parser.add_argument('--set-ssh-key', action='store_true', help="Read SSH key from oglive squashfs and set it in Forgejo")

    parser.add_argument('--extract-ssh-key-from-initrd', action='store_true', help="Extract SSH key from oglive initrd (obsolete)")
    parser.add_argument('--set-ssh-key-in-initrd', action='store_true', help="Configure SSH key in oglive (obsolete)")
    parser.add_argument('--oglive', type=int, metavar='NUM', help = "Do SSH key manipulation on this oglive")
    parser.add_argument('--quiet', action='store_true', help="Quiet console output")
    parser.add_argument("-v", "--verbose", action="store_true", help = "Verbose console output")

    args = parser.parse_args()

    if args.quiet:
        streamLog.setLevel(logging.WARNING)

    if args.verbose:
        streamLog.setLevel(logging.DEBUG)

    installer = OpengnsysGitInstaller()
    installer.set_testmode(args.testmode)
    installer.set_ignoresshkey(args.ignoresshkey)

@@ -364,7 +853,30 @@ if __name__ == '__main__':
    logger.debug("Installation start")

    try:
        installer.install()
        if args.forgejo_only:
            installer.install_forgejo()
        elif args.forgejo_addrepos:
            installer.add_forgejo_repo("linux")
        elif args.test_createuser:
            installer.set_ssh_user_group("oggit2", "oggit2")
        elif args.extract_ssh_key:
            keys = installer.extract_ssh_keys(oglive_num = args.oglive)
            print(f"{keys}")
        elif args.extract_ssh_key_from_initrd:
            key = installer._extract_ssh_key_from_initrd()
            print(f"{key}")
        elif args.set_ssh_key:
            installer.add_ssh_key_from_squashfs(oglive_num=args.oglive)
        elif args.set_ssh_key_in_initrd:
            installer.set_ssh_key_in_initrd()
        else:
            installer.install()
            installer.install_forgejo()

            installer.add_forgejo_repo("windows", "Windows")
            installer.add_forgejo_repo("linux", "Linux")
            installer.add_forgejo_repo("mac", "Mac")

    except RequirementException as req:
        show_error(f"Installation requirement not satisfied: {req.message}")
        exit(1)
@@ -0,0 +1,17 @@
#!/bin/bash
set -e

git clone https://github.com/dchevell/flask-executor opengnsys-flask-executor
cd opengnsys-flask-executor
version=`python3 ./setup.py --version`
cd ..

if [ -d "opengnsys-flask-executor-${version}" ] ; then
    echo "Directory opengnsys-flask-executor-${version} already exists, won't overwrite"
    exit 1
else
    rm -rf opengnsys-flask-executor/.git
    mv opengnsys-flask-executor "opengnsys-flask-executor-${version}"
    tar -c --xz -v -f "opengnsys-flask-executor_${version}.orig.tar.xz" "opengnsys-flask-executor-${version}"
fi
@@ -0,0 +1,28 @@
name: Flask-Executor tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8", "3.9", "3.10"]
        flask-version: ["<2.2", ">=2.2"]
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -q "flask ${{ matrix.flask-version }}"
          pip install -e .[test]
      - name: Test with pytest
        run: |
          pytest --cov=flask_executor/ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
@@ -0,0 +1,105 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
@ -0,0 +1,21 @@
MIT License

Copyright (c) 2018 Dave Chevell

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -0,0 +1,134 @@
Flask-Executor
==============

[![Build Status](https://github.com/dchevell/flask-executor/actions/workflows/tests.yml/badge.svg)](https://github.com/dchevell/flask-executor/actions/workflows/tests.yml)
[![codecov](https://codecov.io/gh/dchevell/flask-executor/branch/master/graph/badge.svg)](https://codecov.io/gh/dchevell/flask-executor)
[![PyPI Version](https://img.shields.io/pypi/v/Flask-Executor.svg)](https://pypi.python.org/pypi/Flask-Executor)
[![GitHub license](https://img.shields.io/github/license/dchevell/flask-executor.svg)](https://github.com/dchevell/flask-executor/blob/master/LICENSE)

Sometimes you need a simple task queue without the overhead of separate worker processes or powerful-but-complex libraries beyond your requirements. Flask-Executor is an easy-to-use wrapper for the `concurrent.futures` module that lets you initialise and configure executors via common Flask application patterns. It's a great way to get up and running fast with a lightweight in-process task queue.

Installation
------------

Flask-Executor is available on PyPI and can be installed with:

    pip install flask-executor

Quick start
-----------

Here's a quick example of using Flask-Executor inside your Flask application:

```python
from flask import Flask
from flask_executor import Executor

app = Flask(__name__)

executor = Executor(app)


def send_email(recipient, subject, body):
    # Magic to send an email
    return True


@app.route('/signup')
def signup():
    # Do signup form
    executor.submit(send_email, recipient, subject, body)
```

Contexts
--------

When calling `submit()` or `map()` Flask-Executor will wrap `ThreadPoolExecutor` callables with a
copy of both the current application context and current request context. Code that must be run in
these contexts or that depends on information or configuration stored in `flask.current_app`,
`flask.request` or `flask.g` can be submitted to the executor without modification.

Note: due to limitations in Python's default object serialisation and a lack of shared memory space between subprocesses, contexts cannot be pushed to `ProcessPoolExecutor()` workers.
Futures
-------

You may want to preserve access to Futures returned from the executor, so that you can retrieve the
results in a different part of your application. Flask-Executor allows Futures to be stored within
the executor itself and provides methods for querying and returning them in different parts of your
app:

```python
@app.route('/start-task')
def start_task():
    executor.submit_stored('calc_power', pow, 323, 1235)
    return jsonify({'result': 'success'})

@app.route('/get-result')
def get_result():
    if not executor.futures.done('calc_power'):
        return jsonify({'status': executor.futures._state('calc_power')})
    future = executor.futures.pop('calc_power')
    return jsonify({'status': 'done', 'result': future.result()})
```

Decoration
----------

Flask-Executor lets you decorate methods in the same style as distributed task queues like
Celery:

```python
@executor.job
def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

@app.route('/decorate_fib')
def decorate_fib():
    fib.submit(5)
    fib.submit_stored('fibonacci', 5)
    fib.map(range(1, 6))
    return 'OK'
```

Default Callbacks
-----------------

Future objects can have callbacks attached by using `Future.add_done_callback`. Flask-Executor
lets you specify default callbacks that will be applied to all new futures created by the executor:

```python
def some_callback(future):
    # do something with future
    pass

executor.add_default_done_callback(some_callback)

# Callback will be added to the below task automatically
executor.submit(pow, 323, 1235)
```

Propagate Exceptions
--------------------

Normally any exceptions thrown by background threads or processes will be swallowed unless explicitly
checked for. To instead surface all exceptions thrown by background tasks, Flask-Executor can add
a special default callback that raises any exceptions thrown by tasks submitted to the executor:

```python
app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
```

Documentation
-------------

Check out the full documentation at [flask-executor.readthedocs.io](https://flask-executor.readthedocs.io)!
@ -0,0 +1,7 @@
opengnsys-flask-executor (0.10.0) UNRELEASED; urgency=medium

  Initial version
  *
  *

 -- Vadim Troshchinskiy <vtroshchinskiy@qindel.com>  Tue, 23 Dec 2024 10:47:04 +0000
@ -0,0 +1,28 @@
Source: opengnsys-flask-executor
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
               dh-python,
               libarchive-dev,
               python3-all,
               python3-mock,
               python3-pytest,
               python3-setuptools
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/vojtechtrefny/pyblkid
Vcs-Browser: https://github.com/vojtechtrefny/pyblkid
Vcs-Git: https://github.com/vojtechtrefny/pyblkid

Package: opengnsys-flask-executor
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Python3 Flask-Executor module
 Sometimes you need a simple task queue without the overhead of separate worker
 processes or powerful-but-complex libraries beyond your requirements.
 .
 Flask-Executor is an easy to use wrapper for the concurrent.futures module that
 lets you initialise and configure executors via common Flask application patterns.
 It's a great way to get up and running fast with a lightweight in-process task queue.
@ -0,0 +1,208 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-libarchive-c
Source: https://github.com/Changaco/python-libarchive-c

Files: *
Copyright: 2014-2018 Changaco <changaco@changaco.oy.lc>
License: CC-0

Files: tests/surrogateescape.py
Copyright: 2015 Changaco <changaco@changaco.oy.lc>
           2011-2013 Victor Stinner <victor.stinner@gmail.com>
License: BSD-2-clause or PSF-2

Files: debian/*
Copyright: 2015 Jérémy Bobbio <lunar@debian.org>
           2019 Mattia Rizzolo <mattia@debian.org>
License: permissive
 Copying and distribution of this package, with or without
 modification, are permitted in any medium without royalty
 provided the copyright notice and this notice are
 preserved.

License: BSD-2-clause
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
 are met:
 * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
 * Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.
 .
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
 OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 SUCH DAMAGE.

License: PSF-2
 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
 and the Individual or Organization ("Licensee") accessing and otherwise using
 this software ("Python") in source or binary form and its associated
 documentation.
 .
 2. Subject to the terms and conditions of this License Agreement, PSF hereby
 grants Licensee a nonexclusive, royalty-free, world-wide license to
 reproduce, analyze, test, perform and/or display publicly, prepare derivative
 works, distribute, and otherwise use Python alone or in any derivative
 version, provided, however, that PSF's License Agreement and PSF's notice of
 copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python
 Software Foundation; All Rights Reserved" are retained in Python alone or in
 any derivative version prepared by Licensee.
 .
 3. In the event Licensee prepares a derivative work that is based on or
 incorporates Python or any part thereof, and wants to make the derivative
 work available to others as provided herein, then Licensee hereby agrees to
 include in any such work a brief summary of the changes made to Python.
 .
 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
 NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT
 NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF
 MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
 PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
 .
 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
 INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
 MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
 THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
 .
 6. This License Agreement will automatically terminate upon a material breach
 of its terms and conditions.
 .
 7. Nothing in this License Agreement shall be deemed to create any
 relationship of agency, partnership, or joint venture between PSF and
 Licensee. This License Agreement does not grant permission to use PSF
 trademarks or trade name in a trademark sense to endorse or promote products
 or services of Licensee, or any third party.
 .
 8. By copying, installing or otherwise using Python, Licensee agrees to be
 bound by the terms and conditions of this License Agreement.

License: CC-0
 Statement of Purpose
 .
 The laws of most jurisdictions throughout the world automatically
 confer exclusive Copyright and Related Rights (defined below) upon
 the creator and subsequent owner(s) (each and all, an "owner") of an
 original work of authorship and/or a database (each, a "Work").
 .
 Certain owners wish to permanently relinquish those rights to a Work
 for the purpose of contributing to a commons of creative, cultural
 and scientific works ("Commons") that the public can reliably and
 without fear of later claims of infringement build upon, modify,
 incorporate in other works, reuse and redistribute as freely as
 possible in any form whatsoever and for any purposes, including
 without limitation commercial purposes. These owners may contribute
 to the Commons to promote the ideal of a free culture and the further
 production of creative, cultural and scientific works, or to gain
 reputation or greater distribution for their Work in part through the
 use and efforts of others.
 .
 For these and/or other purposes and motivations, and without any
 expectation of additional consideration or compensation, the person
 associating CC0 with a Work (the "Affirmer"), to the extent that he
 or she is an owner of Copyright and Related Rights in the Work,
 voluntarily elects to apply CC0 to the Work and publicly distribute
 the Work under its terms, with knowledge of his or her Copyright and
 Related Rights in the Work and the meaning and intended legal effect
 of CC0 on those rights.
 .
 1. Copyright and Related Rights. A Work made available under CC0 may
 be protected by copyright and related or neighboring rights
 ("Copyright and Related Rights"). Copyright and Related Rights
 include, but are not limited to, the following:
 .
   i. the right to reproduce, adapt, distribute, perform, display,
      communicate, and translate a Work;
  ii. moral rights retained by the original author(s) and/or
      performer(s);
 iii. publicity and privacy rights pertaining to a person's image
      or likeness depicted in a Work;
  iv. rights protecting against unfair competition in regards to a
      Work, subject to the limitations in paragraph 4(a), below;
   v. rights protecting the extraction, dissemination, use and
      reuse of data in a Work;
  vi. database rights (such as those arising under Directive
      96/9/EC of the European Parliament and of the Council of 11
      March 1996 on the legal protection of databases, and under
      any national implementation thereof, including any amended or
      successor version of such directive); and
 vii. other similar, equivalent or corresponding rights throughout
      the world based on applicable law or treaty, and any national
      implementations thereof.
 .
 2. Waiver. To the greatest extent permitted by, but not in
 contravention of, applicable law, Affirmer hereby overtly, fully,
 permanently, irrevocably and unconditionally waives, abandons, and
 surrenders all of Affirmer's Copyright and Related Rights and
 associated claims and causes of action, whether now known or
 unknown (including existing as well as future claims and causes of
 action), in the Work (i) in all territories worldwide, (ii) for
 the maximum duration provided by applicable law or treaty
 (including future time extensions), (iii) in any current or future
 medium and for any number of copies, and (iv) for any purpose
 whatsoever, including without limitation commercial, advertising
 or promotional purposes (the "Waiver"). Affirmer makes the Waiver
 for the benefit of each member of the public at large and to the
 detriment of Affirmer's heirs and successors, fully intending that
 such Waiver shall not be subject to revocation, rescission,
 cancellation, termination, or any other legal or equitable action
 to disrupt the quiet enjoyment of the Work by the public as
 contemplated by Affirmer's express Statement of Purpose.
 .
 3. Public License Fallback. Should any part of the Waiver for any
 reason be judged legally invalid or ineffective under applicable law,
 then the Waiver shall be preserved to the maximum extent permitted
 taking into account Affirmer's express Statement of Purpose. In
 addition, to the extent the Waiver is so judged Affirmer hereby
 grants to each affected person a royalty-free, non transferable, non
 sublicensable, non exclusive, irrevocable and unconditional license
 to exercise Affirmer's Copyright and Related Rights in the Work (i)
 in all territories worldwide, (ii) for the maximum duration provided
 by applicable law or treaty (including future time extensions), (iii)
 in any current or future medium and for any number of copies, and
 (iv) for any purpose whatsoever, including without limitation
 commercial, advertising or promotional purposes (the "License"). The
 License shall be deemed effective as of the date CC0 was applied by
 Affirmer to the Work. Should any part of the License for any reason
 be judged legally invalid or ineffective under applicable law, such
 partial invalidity or ineffectiveness shall not invalidate the
 remainder of the License, and in such case Affirmer hereby affirms
 that he or she will not (i) exercise any of his or her remaining
 Copyright and Related Rights in the Work or (ii) assert any
 associated claims and causes of action with respect to the Work, in
 either case contrary to Affirmer's express Statement of Purpose.
 .
 4. Limitations and Disclaimers.
 .
  a. No trademark or patent rights held by Affirmer are waived,
     abandoned, surrendered, licensed or otherwise affected by
     this document.
  b. Affirmer offers the Work as-is and makes no representations
     or warranties of any kind concerning the Work, express,
     implied, statutory or otherwise, including without limitation
     warranties of title, merchantability, fitness for a
     particular purpose, non infringement, or the absence of
     latent or other defects, accuracy, or the present or absence
     of errors, whether or not discoverable, all to the greatest
     extent permissible under applicable law.
  c. Affirmer disclaims responsibility for clearing rights of
     other persons that may apply to the Work or any use thereof,
     including without limitation any person's Copyright and
     Related Rights in the Work. Further, Affirmer disclaims
     responsibility for obtaining any necessary consents,
     permissions or other rights required for any use of the
     Work.
  d. Affirmer understands and acknowledges that Creative Commons
     is not a party to this document and has no duty or obligation
     with respect to this CC0 or use of the Work.
@ -0,0 +1,22 @@
#!/usr/bin/make -f

export LC_ALL=C.UTF-8
export PYBUILD_NAME = libarchive-c
#export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
#export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst
# ./usr/lib/python3/dist-packages/libarchive/
export PYBUILD_INSTALL_ARGS=--install-lib=/usr/share/opengnsys-modules/python3/dist-packages/

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_gencontrol:
	dh_gencontrol -- \
		-Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libarchive-dev \
		| sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/')

override_dh_installdocs:
	# Nothing, we don't want docs

override_dh_installchangelogs:
	# Nothing, we don't want the changelog
@ -0,0 +1 @@
3.0 (quilt)
@ -0,0 +1,2 @@
Tests: upstream-tests
Depends: @, python3-mock, python3-pytest
@ -0,0 +1,14 @@
#!/bin/sh

set -e

if ! [ -d "$AUTOPKGTEST_TMP" ]; then
	echo "AUTOPKGTEST_TMP not set." >&2
	exit 1
fi

cp -rv tests "$AUTOPKGTEST_TMP"
cd "$AUTOPKGTEST_TMP"
mkdir -v libarchive
touch README.rst
py.test-3 tests -vv -l -r a
@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = Flask-Executor
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@ -0,0 +1,30 @@
flask\_executor package
=======================

Submodules
----------

flask\_executor.executor module
-------------------------------

.. automodule:: flask_executor.executor
   :members:
   :undoc-members:
   :show-inheritance:

flask\_executor.futures module
------------------------------

.. automodule:: flask_executor.futures
   :members:
   :undoc-members:
   :show-inheritance:


Module contents
---------------

.. automodule:: flask_executor
   :members:
   :undoc-members:
   :show-inheritance:
@ -0,0 +1,7 @@
flask_executor
==============

.. toctree::
   :maxdepth: 4

   flask_executor
@ -0,0 +1,172 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys

sys.path.insert(0, os.path.abspath('..'))

from flask_executor import __version__


# -- Project information -----------------------------------------------------

project = 'Flask-Executor'
copyright = '2018, Dave Chevell'
author = 'Dave Chevell'

# The short X.Y version
version = '.'.join(__version__.split('.')[:2])
# The full version, including alpha/beta/rc tags
release = __version__


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.viewcode',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}


# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'Flask-Executordoc'


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'Flask-Executor.tex', 'Flask-Executor Documentation',
     'Dave Chevell', 'manual'),
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'flask-executor', 'Flask-Executor Documentation',
     [author], 1)
]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'Flask-Executor', 'Flask-Executor Documentation',
     author, 'Flask-Executor', 'One line description of project.',
     'Miscellaneous'),
]


# -- Extension configuration -------------------------------------------------

# -- Options for intersphinx extension ---------------------------------------

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'http://flask.pocoo.org/docs/': None,
}
@ -0,0 +1,187 @@
.. Flask-Executor documentation master file, created by
   sphinx-quickstart on Sun Sep 23 18:52:39 2018.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Flask-Executor
==============

.. module:: flask_executor

Flask-Executor is a `Flask`_ extension that makes it easy to work with :py:mod:`concurrent.futures`
in your application.

Installation
------------

Flask-Executor is available on PyPI and can be installed with pip::

    $ pip install flask-executor

Setup
------

The Executor extension can either be initialized directly::

    from flask import Flask
    from flask_executor import Executor

    app = Flask(__name__)
    executor = Executor(app)

Or through the factory method::

    executor = Executor()
    executor.init_app(app)


Configuration
-------------

To specify the type of executor to initialise, set ``EXECUTOR_TYPE`` inside your app configuration.
Valid values are ``'thread'`` (default) to initialise a
:class:`~concurrent.futures.ThreadPoolExecutor`, or ``'process'`` to initialise a
:class:`~concurrent.futures.ProcessPoolExecutor`::

    app.config['EXECUTOR_TYPE'] = 'thread'

To define the number of worker threads for a :class:`~concurrent.futures.ThreadPoolExecutor` or the
number of worker processes for a :class:`~concurrent.futures.ProcessPoolExecutor`, set
``EXECUTOR_MAX_WORKERS`` in your app configuration. Valid values are any integer or ``None`` (default)
to let :py:mod:`concurrent.futures` pick defaults for you::

    app.config['EXECUTOR_MAX_WORKERS'] = 5

If multiple executors are needed, :class:`flask_executor.Executor` can be initialised with a ``name``
parameter. Named executors will look for configuration variables prefixed with the specified ``name``
value, uppercased::

    app.config['CUSTOM_EXECUTOR_TYPE'] = 'thread'
    app.config['CUSTOM_EXECUTOR_MAX_WORKERS'] = 5
    executor = Executor(app, name='custom')


Basic Usage
-----------

Flask-Executor supports the standard :class:`concurrent.futures.Executor` methods,
:meth:`~concurrent.futures.Executor.submit` and :meth:`~concurrent.futures.Executor.map`::

    def fib(n):
        if n <= 2:
            return 1
        else:
            return fib(n-1) + fib(n-2)

    @app.route('/run_fib')
    def run_fib():
        executor.submit(fib, 5)
        executor.map(fib, range(1, 6))
        return 'OK'

Submitting a task via :meth:`~concurrent.futures.Executor.submit` returns a
:class:`flask_executor.FutureProxy` object, a subclass of
:class:`concurrent.futures.Future`, from which you can retrieve your job status or result.
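A minimal sketch of inspecting the returned future (``add_numbers`` is a made-up task used only for illustration)::

    from flask import Flask
    from flask_executor import Executor

    app = Flask(__name__)
    executor = Executor(app)

    def add_numbers(a, b):
        return a + b

    # submit() copies the current contexts, so it is called inside one here
    with app.app_context():
        future = executor.submit(add_numbers, 2, 3)

    # result() blocks until the task has finished
    print(future.result())  # 5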
|
||||
|
||||
|
||||
Contexts
--------

When calling :meth:`~concurrent.futures.Executor.submit` or :meth:`~concurrent.futures.Executor.map`,
Flask-Executor will wrap ``ThreadPoolExecutor`` callables with a copy of both the current application
context and the current request context. Code that must be run in these contexts, or that depends on
information or configuration stored in :data:`flask.current_app`, :data:`flask.request` or
:data:`flask.g`, can be submitted to the executor without modification.

Note: due to limitations in Python's default object serialisation and a lack of shared memory space
between subprocesses, contexts cannot be pushed to ``ProcessPoolExecutor`` workers.

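The context-copying behaviour described above can be sketched with the standard library alone. This is an illustrative analogy built on :py:mod:`contextvars`, not Flask-Executor's actual implementation: a callable is run inside a copy of the submitting context, so a value set before submission remains visible inside the worker thread.

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# A stand-in for Flask's context locals: a worker thread would not
# normally see this value, since each thread starts with a fresh context.
request_id = contextvars.ContextVar('request_id')
request_id.set('req-42')

def handler():
    return request_id.get()

# Copy the current context and run the callable inside it, analogous
# to how Flask-Executor wraps callables passed to submit().
ctx = contextvars.copy_context()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(ctx.run, handler)

print(future.result())  # 'req-42'
```

Without the ``ctx.run`` wrapping, ``handler`` would raise ``LookupError`` in the worker thread; this is the same failure mode as touching ``flask.g`` from an unwrapped background task.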
Futures
-------

:class:`flask_executor.FutureProxy` objects look and behave like normal :class:`concurrent.futures.Future`
objects, but allow `flask_executor` to override certain methods and add additional behaviours.
When submitting a callable to :meth:`~concurrent.futures.Future.add_done_callback`, callables are
wrapped with a copy of both the current application context and the current request context.

You may want to preserve access to Futures returned from the executor, so that you can retrieve the
results in a different part of your application. Flask-Executor allows Futures to be stored within
the executor itself and provides methods for querying and returning them in different parts of your
app::

    @app.route('/start-task')
    def start_task():
        executor.submit_stored('calc_power', pow, 323, 1235)
        return jsonify({'result': 'success'})

    @app.route('/get-result')
    def get_result():
        if not executor.futures.done('calc_power'):
            return jsonify({'status': executor.futures._state('calc_power')})
        future = executor.futures.pop('calc_power')
        return jsonify({'status': 'done', 'result': future.result()})

|
||||
|
||||
Decoration
|
||||
----------
|
||||
|
||||
Flask-Executor lets you decorate methods in the same style as distributed task queues when using 'thread' executor type like
|
||||
`Celery`_::
|
||||
|
||||
@executor.job
|
||||
def fib(n):
|
||||
if n <= 2:
|
||||
return 1
|
||||
else:
|
||||
return fib(n-1) + fib(n-2)
|
||||
|
||||
@app.route('/decorate_fib')
|
||||
def decorate_fib():
|
||||
fib.submit(5)
|
||||
fib.submit_stored('fibonacci', 5)
|
||||
fib.map(range(1, 6))
|
||||
return 'OK'
|
||||
|
||||
|
||||
.. toctree::
    :maxdepth: 2
    :caption: Contents:

    api/modules

Default Callbacks
-----------------

:class:`concurrent.futures.Future` objects can have callbacks attached by using
:meth:`~concurrent.futures.Future.add_done_callback`. Flask-Executor lets you specify default
callbacks that will be applied to all new futures created by the executor::

    def some_callback(future):
        # do something with future
        ...

    executor.add_default_done_callback(some_callback)

    # Callback will be added to the below task automatically
    executor.submit(pow, 323, 1235)

Propagate Exceptions
--------------------

Normally any exceptions thrown by background threads or processes will be swallowed unless explicitly
checked for. To instead surface all exceptions thrown by background tasks, Flask-Executor can add
a special default callback that raises any exceptions thrown by tasks submitted to the executor::

    app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
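What that flag does can be sketched with the standard library: a done-callback inspects :meth:`~concurrent.futures.Future.exception` and re-raises anything the task captured. This mirrors the behaviour of the callback Flask-Executor registers; the names below are illustrative, not the library's own API.

```python
from concurrent.futures import ThreadPoolExecutor, wait

def propagate(future):
    # Mirrors the shape of Flask-Executor's exception-propagating
    # callback: re-raise any exception the background task captured.
    exc = future.exception()
    if exc:
        raise exc

def failing_task():
    raise RuntimeError('task failed')

pool = ThreadPoolExecutor(max_workers=1)
future = pool.submit(failing_task)
wait([future])

# Without the callback, the RuntimeError above is silently stored on
# the future; invoking the callback surfaces it.
try:
    propagate(future)
    raised = False
except RuntimeError:
    raised = True

pool.shutdown()
```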
Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

.. _Flask: http://flask.pocoo.org/
.. _Celery: http://www.celeryproject.org/

@ -0,0 +1,5 @@
from flask_executor.executor import Executor


__all__ = ('Executor',)
__version__ = '0.10.0'

@ -0,0 +1,273 @@
import concurrent.futures
import contextvars
import copy
import re

from flask import copy_current_request_context, current_app, g

from flask_executor.futures import FutureCollection, FutureProxy
from flask_executor.helpers import InstanceProxy, str2bool


def get_current_app_context():
    try:
        from flask.globals import _cv_app
        return _cv_app.get(None)
    except ImportError:
        from flask.globals import _app_ctx_stack
        return _app_ctx_stack.top


def push_app_context(fn):
    app = current_app._get_current_object()
    _g = copy.copy(g)

    def wrapper(*args, **kwargs):
        with app.app_context():
            ctx = get_current_app_context()
            ctx.g = _g
            return fn(*args, **kwargs)

    return wrapper


def propagate_exceptions_callback(future):
    exc = future.exception()
    if exc:
        raise exc


class ExecutorJob:
    """Wraps a function with an executor so as to allow the wrapped function
    to submit itself directly to the executor."""

    def __init__(self, executor, fn):
        self.executor = executor
        self.fn = fn

    def submit(self, *args, **kwargs):
        future = self.executor.submit(self.fn, *args, **kwargs)
        return future

    def submit_stored(self, future_key, *args, **kwargs):
        future = self.executor.submit_stored(future_key, self.fn, *args, **kwargs)
        return future

    def map(self, *iterables, **kwargs):
        results = self.executor.map(self.fn, *iterables, **kwargs)
        return results


class Executor(InstanceProxy, concurrent.futures._base.Executor):
    """An executor interface for :py:mod:`concurrent.futures` designed for
    working with Flask applications.

    :param app: A Flask application instance.
    :param name: An optional name for the executor. This can be used to
                 configure multiple executors. Named executors will look for
                 environment variables prefixed with the name in uppercase,
                 e.g. ``CUSTOM_EXECUTOR_TYPE``.
    """

    def __init__(self, app=None, name=''):
        self.app = app
        self._default_done_callbacks = []
        self.futures = FutureCollection()
        if re.match(r'^(\w+)?$', name) is None:
            raise ValueError(
                "Executor names may only contain letters, numbers or underscores"
            )
        self.name = name
        prefix = name.upper() + '_' if name else ''
        self.EXECUTOR_TYPE = prefix + 'EXECUTOR_TYPE'
        self.EXECUTOR_MAX_WORKERS = prefix + 'EXECUTOR_MAX_WORKERS'
        self.EXECUTOR_FUTURES_MAX_LENGTH = prefix + 'EXECUTOR_FUTURES_MAX_LENGTH'
        self.EXECUTOR_PROPAGATE_EXCEPTIONS = prefix + 'EXECUTOR_PROPAGATE_EXCEPTIONS'
        self.EXECUTOR_PUSH_APP_CONTEXT = prefix + 'EXECUTOR_PUSH_APP_CONTEXT'

        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        """Initialise application. This will also initialise the configured
        executor type:

        * :class:`concurrent.futures.ThreadPoolExecutor`
        * :class:`concurrent.futures.ProcessPoolExecutor`
        """
        app.config.setdefault(self.EXECUTOR_TYPE, 'thread')
        app.config.setdefault(self.EXECUTOR_PUSH_APP_CONTEXT, True)
        futures_max_length = app.config.setdefault(self.EXECUTOR_FUTURES_MAX_LENGTH, None)
        propagate_exceptions = app.config.setdefault(self.EXECUTOR_PROPAGATE_EXCEPTIONS, False)
        if futures_max_length is not None:
            self.futures.max_length = int(futures_max_length)
        if str2bool(propagate_exceptions):
            self.add_default_done_callback(propagate_exceptions_callback)
        self._self = self._make_executor(app)
        app.extensions[self.name + 'executor'] = self

    def _make_executor(self, app):
        executor_max_workers = app.config.setdefault(self.EXECUTOR_MAX_WORKERS, None)
        if executor_max_workers is not None:
            executor_max_workers = int(executor_max_workers)
        executor_type = app.config[self.EXECUTOR_TYPE]
        if executor_type == 'thread':
            _executor = concurrent.futures.ThreadPoolExecutor
        elif executor_type == 'process':
            _executor = concurrent.futures.ProcessPoolExecutor
        else:
            raise ValueError("{} is not a valid executor type.".format(executor_type))
        return _executor(max_workers=executor_max_workers)

    def _prepare_fn(self, fn, force_copy=False):
        if isinstance(self._self, concurrent.futures.ThreadPoolExecutor) \
                or force_copy:
            fn = copy_current_request_context(fn)
            if current_app.config[self.EXECUTOR_PUSH_APP_CONTEXT]:
                fn = push_app_context(fn)
        return fn

    def submit(self, fn, *args, **kwargs):
        r"""Schedules the callable, fn, to be executed as fn(\*args, \**kwargs)
        and returns a :class:`~flask_executor.futures.FutureProxy` object, a
        :class:`~concurrent.futures.Future` subclass representing
        the execution of the callable.

        See also :meth:`concurrent.futures.Executor.submit`.

        Callables are wrapped in a copy of the current application context and
        the current request context. Code that depends on information or
        configuration stored in :data:`flask.current_app`,
        :data:`flask.request` or :data:`flask.g` can be run without
        modification.

        Note: Because callables only have access to *copies* of the application
        or request contexts, any changes made to these copies will not be
        reflected in the original view. Further, changes in the original app or
        request context that occur after the callable is submitted will not be
        available to the callable.

        Example::

            future = executor.submit(pow, 323, 1235)
            print(future.result())

        :param fn: The callable to be executed.
        :param \*args: A list of positional parameters used with
                       the callable.
        :param \**kwargs: A dict of named parameters used with
                          the callable.

        :rtype: flask_executor.FutureProxy
        """
        fn = self._prepare_fn(fn)
        future = self._self.submit(fn, *args, **kwargs)
        for callback in self._default_done_callbacks:
            future.add_done_callback(callback)
        return FutureProxy(future, self)

    def submit_stored(self, future_key, fn, *args, **kwargs):
        r"""Submits the callable using :meth:`Executor.submit` and stores the
        Future in the executor via a
        :class:`~flask_executor.futures.FutureCollection` object available at
        :data:`Executor.futures`. These futures can be retrieved anywhere
        inside your application and queried for status or popped from the
        collection. Due to memory concerns, the maximum length of the
        FutureCollection is limited, and the oldest Futures will be dropped
        when the limit is exceeded.

        See :class:`flask_executor.futures.FutureCollection` for more
        information on how to query futures in a collection.

        Example::

            @app.route('/start-task')
            def start_task():
                executor.submit_stored('calc_power', pow, 323, 1235)
                return jsonify({'result': 'success'})

            @app.route('/get-result')
            def get_result():
                if not executor.futures.done('calc_power'):
                    future_status = executor.futures._state('calc_power')
                    return jsonify({'status': future_status})
                future = executor.futures.pop('calc_power')
                return jsonify({'status': 'done', 'result': future.result()})

        :param future_key: Stores the Future for the submitted task inside the
                           executor's ``futures`` object with the specified
                           key.
        :param fn: The callable to be executed.
        :param \*args: A list of positional parameters used with
                       the callable.
        :param \**kwargs: A dict of named parameters used with
                          the callable.

        :rtype: concurrent.futures.Future
        """
        future = self.submit(fn, *args, **kwargs)
        self.futures.add(future_key, future)
        return future

    def map(self, fn, *iterables, **kwargs):
        r"""Submits the callable, fn, and an iterable of arguments to the
        executor and returns the results inside a generator.

        See also :meth:`concurrent.futures.Executor.map`.

        Callables are wrapped in a copy of the current application context and
        the current request context. Code that depends on information or
        configuration stored in :data:`flask.current_app`,
        :data:`flask.request` or :data:`flask.g` can be run without
        modification.

        Note: Because callables only have access to *copies* of the application
        or request contexts, any changes made to these copies will not be
        reflected in the original view. Further, changes in the original app or
        request context that occur after the callable is submitted will not be
        available to the callable.

        :param fn: The callable to be executed.
        :param \*iterables: An iterable of arguments the callable will apply to.
        :param \**kwargs: A dict of named parameters to pass to the underlying
                          executor's :meth:`~concurrent.futures.Executor.map`
                          method.
        """
        fn = self._prepare_fn(fn)
        return self._self.map(fn, *iterables, **kwargs)

    def job(self, fn):
        """Decorator. Use this to transform functions into `ExecutorJob`
        instances that can submit themselves directly to the executor.

        Example::

            @executor.job
            def fib(n):
                if n <= 2:
                    return 1
                else:
                    return fib(n-1) + fib(n-2)

            future = fib.submit(5)
            results = fib.map(range(1, 6))
        """
        if isinstance(self._self, concurrent.futures.ProcessPoolExecutor):
            raise TypeError(
                "Can't decorate {}: Executors that use multiprocessing "
                "don't support decorators".format(fn)
            )
        return ExecutorJob(executor=self, fn=fn)

    def add_default_done_callback(self, fn):
        """Registers a callable to be attached to all newly created futures.
        When a callable is submitted to the executor,
        :meth:`concurrent.futures.Future.add_done_callback` is called for every
        default callable that has been set.

        :param fn: The callable to be added to the list of default done
                   callbacks for new Futures.
        """
        self._default_done_callbacks.append(fn)

@ -0,0 +1,107 @@
from collections import OrderedDict
from concurrent.futures import Future

from flask_executor.helpers import InstanceProxy


class FutureCollection:
    """A FutureCollection is an object to store and interact with
    :class:`concurrent.futures.Future` objects. It provides access to all
    attributes and methods of a Future by proxying attribute calls to the
    stored Future object.

    To access the methods of a Future from a FutureCollection instance, include
    a valid ``future_key`` value as the first argument of the method call. To
    access attributes, call them as though they were a method with
    ``future_key`` as the sole argument. If ``future_key`` does not exist, the
    call will always return None. If ``future_key`` does exist but the
    referenced Future does not contain the requested attribute, an
    :exc:`AttributeError` will be raised.

    To prevent memory exhaustion, a FutureCollection instance can be bounded by
    number of items using the ``max_length`` parameter. As a best practice,
    Futures should be popped once they are ready for use, with the proxied
    attribute form used to determine whether a Future is ready to be used or
    discarded.

    :param max_length: Maximum number of Futures to store. Oldest Futures are
                       discarded first.
    """

    def __init__(self, max_length=50):
        self.max_length = max_length
        self._futures = OrderedDict()

    def __contains__(self, future):
        return future in self._futures.values()

    def __len__(self):
        return len(self._futures)

    def __getattr__(self, attr):
        # Call any valid Future method or attribute
        def _future_attr(future_key, *args, **kwargs):
            if future_key not in self._futures:
                return None
            future_attr = getattr(self._futures[future_key], attr)
            if callable(future_attr):
                return future_attr(*args, **kwargs)
            return future_attr

        return _future_attr

    def _check_limits(self):
        if self.max_length is not None:
            while len(self._futures) > self.max_length:
                self._futures.popitem(last=False)

    def add(self, future_key, future):
        """Add a new Future. If a ``max_length`` limit was defined for the
        FutureCollection, old Futures may be dropped to respect this limit.

        :param future_key: Key for the Future to be added.
        :param future: Future to be added.
        """
        if future_key in self._futures:
            raise ValueError("future_key {} already exists".format(future_key))
        self._futures[future_key] = future
        self._check_limits()

    def pop(self, future_key):
        """Return a Future and remove it from the collection. Futures that are
        ready to be used should always be popped so they do not continue to
        consume memory.

        Returns ``None`` if the key doesn't exist.

        :param future_key: Key for the Future to be returned.
        """
        return self._futures.pop(future_key, None)


class FutureProxy(InstanceProxy, Future):
    """A FutureProxy is an instance proxy that wraps an instance of
    :class:`concurrent.futures.Future`. Since an executor can't be made to
    return a subclassed Future object, this proxy class is used to override
    instance behaviours whilst providing an agnostic method of accessing
    the original methods and attributes.

    :param future: An instance of :class:`~concurrent.futures.Future` that
                   the proxy will provide access to.
    :param executor: An instance of :class:`flask_executor.Executor` which
                     will be used to provide access to Flask context features.
    """

    def __init__(self, future, executor):
        self._self = future
        self._executor = executor

    def add_done_callback(self, fn):
        fn = self._executor._prepare_fn(fn, force_copy=True)
        return self._self.add_done_callback(fn)

    def __eq__(self, obj):
        return self._self == obj

    def __hash__(self):
        return self._self.__hash__()

@ -0,0 +1,37 @@
PROXIED_OBJECT = '__proxied_object'


def str2bool(v):
    return str(v).lower() in ("yes", "true", "t", "1")


class InstanceProxy(object):

    def __init__(self, proxied_obj):
        self._self = proxied_obj

    @property
    def _self(self):
        try:
            return object.__getattribute__(self, PROXIED_OBJECT)
        except AttributeError:
            return None

    @_self.setter
    def _self(self, proxied_obj):
        object.__setattr__(self, PROXIED_OBJECT, proxied_obj)
        return self

    def __getattribute__(self, attr):
        super_cls_dict = InstanceProxy.__dict__
        cls_dict = object.__getattribute__(self, '__class__').__dict__
        inst_dict = object.__getattribute__(self, '__dict__')
        if attr in cls_dict or attr in inst_dict or attr in super_cls_dict:
            return object.__getattribute__(self, attr)
        target_obj = object.__getattribute__(self, PROXIED_OBJECT)
        return object.__getattribute__(target_obj, attr)

    def __repr__(self):
        class_name = object.__getattribute__(self, '__class__').__name__
        target_repr = repr(self._self)
        return '<%s( %s )>' % (class_name, target_repr)

@ -0,0 +1,52 @@
import setuptools
from setuptools.command.test import test
import sys

try:
    from flask_executor import __version__ as version
except ImportError:
    import re
    pattern = re.compile(r"__version__ = '(.*)'")
    with open('flask_executor/__init__.py') as f:
        version = pattern.search(f.read()).group(1)


with open('README.md', 'r') as fh:
    long_description = fh.read()


class pytest(test):

    def run_tests(self):
        import pytest
        errno = pytest.main(self.test_args)
        sys.exit(errno)


setuptools.setup(
    name='Flask-Executor',
    version=version,
    author='Dave Chevell',
    author_email='chevell@gmail.com',
    description='An easy to use Flask wrapper for concurrent.futures',
    long_description=long_description,
    long_description_content_type='text/markdown',
    url='https://github.com/dchevell/flask-executor',
    packages=setuptools.find_packages(exclude=['tests']),
    keywords=['flask', 'concurrent.futures'],
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    license='MIT',
    install_requires=['Flask'],
    extras_require={
        ':python_version == "2.7"': ['futures>=3.1.1'],
        'test': ['pytest', 'pytest-cov', 'codecov', 'flask-sqlalchemy'],
    },
    test_suite='tests',
    cmdclass={
        'test': pytest
    }
)

@ -0,0 +1,18 @@
from flask import Flask
import pytest

from flask_executor import Executor


@pytest.fixture(params=['thread_push_app_context', 'thread_copy_app_context', 'process'])
def app(request):
    app = Flask(__name__)
    app.config['EXECUTOR_TYPE'] = 'process' if request.param == 'process' else 'thread'
    app.config['EXECUTOR_PUSH_APP_CONTEXT'] = request.param == 'thread_push_app_context'

    return app


@pytest.fixture
def default_app():
    app = Flask(__name__)
    return app

@ -0,0 +1,376 @@
import concurrent
import concurrent.futures
import logging
import random
import time
from threading import local

import pytest
from flask import current_app, g, request

from flask_executor import Executor
from flask_executor.executor import propagate_exceptions_callback


# Reusable functions for tests

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)


def app_context_test_value(_=None):
    return current_app.config['TEST_VALUE']


def request_context_test_value(_=None):
    return request.test_value


def g_context_test_value(_=None):
    return g.test_value


def fail():
    time.sleep(0.1)
    print(hello)


def test_init(app):
    executor = Executor(app)
    assert 'executor' in app.extensions
    assert isinstance(executor, concurrent.futures._base.Executor)
    assert isinstance(executor._self, concurrent.futures._base.Executor)
    assert getattr(executor, 'shutdown')


def test_factory_init(app):
    executor = Executor()
    executor.init_app(app)
    assert 'executor' in app.extensions
    assert isinstance(executor._self, concurrent.futures._base.Executor)


def test_thread_executor_init(default_app):
    default_app.config['EXECUTOR_TYPE'] = 'thread'
    executor = Executor(default_app)
    assert isinstance(executor._self, concurrent.futures.ThreadPoolExecutor)
    assert isinstance(executor, concurrent.futures.ThreadPoolExecutor)


def test_process_executor_init(default_app):
    default_app.config['EXECUTOR_TYPE'] = 'process'
    executor = Executor(default_app)
    assert isinstance(executor._self, concurrent.futures.ProcessPoolExecutor)
    assert isinstance(executor, concurrent.futures.ProcessPoolExecutor)


def test_default_executor_init(default_app):
    executor = Executor(default_app)
    assert isinstance(executor._self, concurrent.futures.ThreadPoolExecutor)


def test_invalid_executor_init(default_app):
    default_app.config['EXECUTOR_TYPE'] = 'invalid_value'
    try:
        executor = Executor(default_app)
    except ValueError:
        assert True
    else:
        assert False


def test_submit(app):
    executor = Executor(app)
    with app.test_request_context(''):
        future = executor.submit(fib, 5)
        assert future.result() == fib(5)


def test_max_workers(app):
    EXECUTOR_MAX_WORKERS = 10
    app.config['EXECUTOR_MAX_WORKERS'] = EXECUTOR_MAX_WORKERS
    executor = Executor(app)
    assert executor._max_workers == EXECUTOR_MAX_WORKERS
    assert executor._self._max_workers == EXECUTOR_MAX_WORKERS


def test_thread_decorator_submit(default_app):
    default_app.config['EXECUTOR_TYPE'] = 'thread'
    executor = Executor(default_app)

    @executor.job
    def decorated(n):
        return fib(n)

    with default_app.test_request_context(''):
        future = decorated.submit(5)
        assert future.result() == fib(5)


def test_thread_decorator_submit_stored(default_app):
    default_app.config['EXECUTOR_TYPE'] = 'thread'
    executor = Executor(default_app)

    @executor.job
    def decorated(n):
        return fib(n)

    with default_app.test_request_context():
        future = decorated.submit_stored('fibonacci', 35)
        assert executor.futures.done('fibonacci') is False
        assert future in executor.futures
        executor.futures.pop('fibonacci')
        assert future not in executor.futures


def test_thread_decorator_map(default_app):
    iterable = list(range(5))
    default_app.config['EXECUTOR_TYPE'] = 'thread'
    executor = Executor(default_app)

    @executor.job
    def decorated(n):
        return fib(n)

    with default_app.test_request_context(''):
        results = decorated.map(iterable)
        for i, r in zip(iterable, results):
            assert fib(i) == r


def test_process_decorator(default_app):
    ''' Using decorators should fail with a TypeError when using the ProcessPoolExecutor '''
    default_app.config['EXECUTOR_TYPE'] = 'process'
    executor = Executor(default_app)
    try:
        @executor.job
        def decorated(n):
            return fib(n)
    except TypeError:
        pass
    else:
        assert 0


def test_submit_app_context(default_app):
    test_value = random.randint(1, 101)
    default_app.config['TEST_VALUE'] = test_value
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        future = executor.submit(app_context_test_value)
        assert future.result() == test_value


def test_submit_g_context_process(default_app):
    test_value = random.randint(1, 101)
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        g.test_value = test_value
        future = executor.submit(g_context_test_value)
        assert future.result() == test_value


def test_submit_request_context(default_app):
    test_value = random.randint(1, 101)
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        request.test_value = test_value
        future = executor.submit(request_context_test_value)
        assert future.result() == test_value


def test_map_app_context(default_app):
    test_value = random.randint(1, 101)
    iterator = list(range(5))
    default_app.config['TEST_VALUE'] = test_value
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        results = executor.map(app_context_test_value, iterator)
        for r in results:
            assert r == test_value


def test_map_g_context_process(default_app):
    test_value = random.randint(1, 101)
    iterator = list(range(5))
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        g.test_value = test_value
        results = executor.map(g_context_test_value, iterator)
        for r in results:
            assert r == test_value


def test_map_request_context(default_app):
    test_value = random.randint(1, 101)
    iterator = list(range(5))
    executor = Executor(default_app)
    with default_app.test_request_context('/'):
        request.test_value = test_value
        results = executor.map(request_context_test_value, iterator)
        for r in results:
            assert r == test_value


def test_executor_stored_future(default_app):
    executor = Executor(default_app)
    with default_app.test_request_context():
        future = executor.submit_stored('fibonacci', fib, 35)
        assert executor.futures.done('fibonacci') is False
        assert future in executor.futures
        executor.futures.pop('fibonacci')
        assert future not in executor.futures


def test_set_max_futures(default_app):
    default_app.config['EXECUTOR_FUTURES_MAX_LENGTH'] = 10
    executor = Executor(default_app)
    assert executor.futures.max_length == default_app.config['EXECUTOR_FUTURES_MAX_LENGTH']


def test_named_executor(default_app):
    name = 'custom'
    EXECUTOR_MAX_WORKERS = 5
    CUSTOM_EXECUTOR_MAX_WORKERS = 10
    default_app.config['EXECUTOR_MAX_WORKERS'] = EXECUTOR_MAX_WORKERS
    default_app.config['CUSTOM_EXECUTOR_MAX_WORKERS'] = CUSTOM_EXECUTOR_MAX_WORKERS
    executor = Executor(default_app)
    custom_executor = Executor(default_app, name=name)
    assert 'executor' in default_app.extensions
    assert name + 'executor' in default_app.extensions
    assert executor._self._max_workers == EXECUTOR_MAX_WORKERS
    assert executor._max_workers == EXECUTOR_MAX_WORKERS
    assert custom_executor._self._max_workers == CUSTOM_EXECUTOR_MAX_WORKERS
    assert custom_executor._max_workers == CUSTOM_EXECUTOR_MAX_WORKERS


def test_named_executor_submit(app):
    name = 'custom'
    custom_executor = Executor(app, name=name)
    with app.test_request_context(''):
        future = custom_executor.submit(fib, 5)
        assert future.result() == fib(5)


def test_named_executor_name(default_app):
    name = 'invalid name'
    try:
        executor = Executor(default_app, name=name)
    except ValueError:
        assert True
    else:
        assert False


def test_default_done_callback(app):
    executor = Executor(app)

    def callback(future):
        setattr(future, 'test', 'test')

    executor.add_default_done_callback(callback)
    with app.test_request_context('/'):
        future = executor.submit(fib, 5)
        concurrent.futures.wait([future])
        assert hasattr(future, 'test')


def test_propagate_exception_callback(app, caplog):
    caplog.set_level(logging.ERROR)
    app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
    executor = Executor(app)
    with pytest.raises(NameError):
        with app.test_request_context('/'):
            future = executor.submit(fail)
            concurrent.futures.wait([future])
            future.result()


def test_coerce_config_types(default_app):
    default_app.config['EXECUTOR_MAX_WORKERS'] = '5'
    default_app.config['EXECUTOR_FUTURES_MAX_LENGTH'] = '10'
    default_app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = 'true'
    executor = Executor(default_app)
    with default_app.test_request_context():
        future = executor.submit_stored('fibonacci', fib, 35)


def test_shutdown_executor(default_app):
    executor = Executor(default_app)
    assert executor._shutdown is False
    executor.shutdown()
    assert executor._shutdown is True


def test_pre_init_executor(default_app):
    executor = Executor()

    @executor.job
    def decorated(n):
        return fib(n)

    assert executor
    executor.init_app(default_app)
    with default_app.test_request_context(''):
        future = decorated.submit(5)
        assert future.result() == fib(5)


thread_local = local()


def set_thread_local():
    if hasattr(thread_local, 'value'):
        raise ValueError('thread local already present')
    thread_local.value = True


def clear_thread_local(response_or_exc):
    if hasattr(thread_local, 'value'):
        del thread_local.value
    return response_or_exc


def test_teardown_appcontext_is_called(default_app):
    default_app.config['EXECUTOR_MAX_WORKERS'] = 1
    default_app.config['EXECUTOR_PUSH_APP_CONTEXT'] = True
    default_app.teardown_appcontext(clear_thread_local)

    executor = Executor(default_app)
    with default_app.test_request_context():
|
||||
futures = [executor.submit(set_thread_local) for _ in range(2)]
|
||||
concurrent.futures.wait(futures)
|
||||
[propagate_exceptions_callback(future) for future in futures]
|
||||
|
||||
|
||||
try:
|
||||
import flask_sqlalchemy
|
||||
except ImportError:
|
||||
flask_sqlalchemy = None
|
||||
|
||||
|
||||
@pytest.mark.skipif(flask_sqlalchemy is None, reason="flask_sqlalchemy not installed")
|
||||
def test_sqlalchemy(default_app, caplog):
|
||||
default_app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'echo_pool': 'debug', 'echo': 'debug'}
|
||||
default_app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
|
||||
default_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
|
||||
default_app.config['EXECUTOR_PUSH_APP_CONTEXT'] = True
|
||||
default_app.config['EXECUTOR_MAX_WORKERS'] = 1
|
||||
db = flask_sqlalchemy.SQLAlchemy(default_app)
|
||||
|
||||
def test_db():
|
||||
return list(db.session.execute('select 1'))
|
||||
|
||||
executor = Executor(default_app)
|
||||
with default_app.test_request_context():
|
||||
for i in range(2):
|
||||
with caplog.at_level('DEBUG'):
|
||||
caplog.clear()
|
||||
future = executor.submit(test_db)
|
||||
concurrent.futures.wait([future])
|
||||
future.result()
|
||||
assert 'checked out from pool' in caplog.text
|
||||
assert 'being returned to pool' in caplog.text
|
|
@ -0,0 +1,97 @@
import concurrent.futures
import time

import pytest

from flask_executor import Executor
from flask_executor.futures import FutureCollection, FutureProxy
from flask_executor.helpers import InstanceProxy


def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)


def test_plain_future():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection()
    future = executor.submit(fib, 33)
    futures.add('fibonacci', future)
    assert futures.done('fibonacci') is False
    assert futures._state('fibonacci') is not None
    assert future in futures
    futures.pop('fibonacci')
    assert future not in futures


def test_missing_future():
    futures = FutureCollection()
    assert futures.running('test') is None


def test_duplicate_add_future():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection()
    future = executor.submit(fib, 33)
    futures.add('fibonacci', future)
    try:
        futures.add('fibonacci', future)
    except ValueError:
        assert True
    else:
        assert False


def test_futures_max_length():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection(max_length=10)
    future = executor.submit(pow, 2, 4)
    futures.add(0, future)
    assert future in futures
    assert len(futures) == 1
    for i in range(1, 11):
        futures.add(i, executor.submit(pow, 2, 4))
    assert len(futures) == 10
    assert future not in futures


def test_future_proxy(default_app):
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        future = executor.submit(pow, 2, 4)
    # Test if we're returning a subclass of Future
    assert isinstance(future, concurrent.futures.Future)
    assert isinstance(future, FutureProxy)
    concurrent.futures.wait([future])
    # test standard Future methods and attributes
    assert future._state == concurrent.futures._base.FINISHED
    assert future.done()
    assert future.exception(timeout=0) is None


def test_add_done_callback(default_app):
    """Exceptions thrown in callbacks can't be easily caught and make it hard
    to test for callback failure. To combat this, a global variable is used to
    store the value of an exception and test for its existence.
    """
    executor = Executor(default_app)
    global exception
    exception = None
    with default_app.test_request_context(''):
        future = executor.submit(time.sleep, 0.5)

        def callback(future):
            global exception
            try:
                executor.submit(time.sleep, 0)
            except RuntimeError as e:
                exception = e

        future.add_done_callback(callback)
    concurrent.futures.wait([future])
    assert exception is None


def test_instance_proxy():
    class TestProxy(InstanceProxy):
        pass

    x = TestProxy(concurrent.futures.Future())
    assert isinstance(x, concurrent.futures.Future)
    assert 'TestProxy' in repr(x)
    assert 'Future' in repr(x)
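For readers unfamiliar with the stdlib primitives these tests exercise, here is a minimal self-contained sketch using only `concurrent.futures` (no Flask and no `FutureCollection`; the `record`/`results` names are purely illustrative). It shows the submit/wait/done-callback cycle that `FutureProxy` and `add_done_callback` build on:

```python
import concurrent.futures

def fib(n):
    # Naive recursive Fibonacci, same helper as in the tests above
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

# A done-callback fires once the future resolves, on the worker thread.
results = {}

def record(future):
    results['fib'] = future.result()

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(fib, 10)
    future.add_done_callback(record)
    concurrent.futures.wait([future])

assert results['fib'] == fib(10)  # fib(10) == 55
```

Everything flask-executor adds on top of this (app/request context push, stored futures, exception propagation) is a wrapper around exactly this lifecycle.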
@ -0,0 +1,18 @@
#!/bin/bash
set -e

git clone https://github.com/python-restx/flask-restx opengnsys-flask-restx
cd opengnsys-flask-restx
git checkout 1.3.0
version=$(python3 ./setup.py --version)
cd ..

if [ -d "opengnsys-flask-restx-${version}" ] ; then
    echo "Directory opengnsys-flask-restx-${version} already exists, won't overwrite"
    exit 1
else
    rm -rf opengnsys-flask-restx/.git
    mv opengnsys-flask-restx "opengnsys-flask-restx-${version}"
    tar -c --xz -v -f "opengnsys-flask-restx_${version}.orig.tar.xz" "opengnsys-flask-restx-${version}"
fi
@ -0,0 +1,21 @@
# EditorConfig is awesome: https://EditorConfig.org

# top-most EditorConfig file
root = true

# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,py}]
charset = utf-8

# 4 space indentation
[*.py]
indent_style = space
indent_size = 4
max_line_length = 120
@ -0,0 +1,44 @@
---
name: Bug Report
about: Tell us how Flask-RESTX is broken
title: ''
labels: bug
assignees: ''

---

### ***** **BEFORE LOGGING AN ISSUE** *****

- Is this something you can **debug and fix**? Send a pull request! Bug fixes and documentation fixes are welcome.
- Please check if a similar issue already exists or has been closed before. Seriously, nobody here is getting paid. Help us out and take five minutes to make sure you aren't submitting a duplicate.
- Please review the [guidelines for contributing](https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst)

### **Code**

```python
from your_code import your_buggy_implementation
```

### **Repro Steps** (if applicable)
1. ...
2. ...
3. Broken!

### **Expected Behavior**
A description of what you expected to happen.

### **Actual Behavior**
A description of the unexpected, buggy behavior.

### **Error Messages/Stack Trace**
If applicable, add the stack trace produced by the error.

### **Environment**
- Python version
- Flask version
- Flask-RESTX version
- Other installed Flask extensions

### **Additional Context**

This is your last chance to provide any pertinent details, don't let this opportunity pass you by!
@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
@ -0,0 +1,14 @@
---
name: Question
about: Ask a question
title: ''
labels: question
assignees: ''

---

**Ask a question**
A clear and concise question

**Additional context**
Add any other context or screenshots about the question here.
@ -0,0 +1,25 @@
## Proposed changes

At a high level, describe your reasoning for making these changes. If you are fixing a bug or resolving a feature request, **please include a link to the issue**.

## Types of changes

What types of changes does your code introduce?
_Put an `x` in the boxes that apply_

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)

## Checklist

_Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code._

- [ ] I have read the [guidelines for contributing](https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst)
- [ ] All unit tests pass on my local version with my changes
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if appropriate)

## Further comments

If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc...
@ -0,0 +1,10 @@
name: Lint

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable
@ -0,0 +1,28 @@
name: Release
on:
  push:
    tags:
      - "*"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ".[dev]" wheel
      - name: Fetch web assets
        run: inv assets
      - name: Publish
        env:
          TWINE_USERNAME: "__token__"
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
        run: |
          python setup.py sdist bdist_wheel
          twine upload dist/*
@ -0,0 +1,74 @@
name: Tests
on:
  pull_request:
    branches:
      - "*"
  push:
    branches:
      - "*"
  schedule:
    - cron: "0 1 * * *"
  workflow_dispatch:
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11", "pypy3.8", "3.12"]
        flask: ["<3.0.0", ">=3.0.0"]
    steps:
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install "flask${{ matrix.flask }}"
          pip install ".[test]"
      - name: Test with inv
        run: inv cover qa
      - name: Codecov
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml
  bench:
    needs: unit-tests
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - name: Set up Python 3.8
        uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - name: Checkout ${{ github.base_ref }}
        uses: actions/checkout@v3
        with:
          ref: ${{ github.base_ref }}
          path: base
      - name: Checkout ${{ github.ref }}
        uses: actions/checkout@v3
        with:
          ref: ${{ github.ref }}
          path: ref
      - name: Install dev dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e "base[dev]"
      - name: Install ci dependencies for ${{ github.base_ref }}
        run: pip install -e "base[ci]"
      - name: Benchmarks for ${{ github.base_ref }}
        run: |
          cd base
          inv benchmark --max-time 4 --save
          mv .benchmarks ../ref/
      - name: Install ci dependencies for ${{ github.ref }}
        run: pip install -e "ref[ci]"
      - name: Benchmarks for ${{ github.ref }}
        run: |
          cd ref
          inv benchmark --max-time 4 --compare
@ -0,0 +1,70 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

# C extensions
*.so

# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
cover
*.egg-info/
.installed.cfg
*.egg

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
prof/
histograms/
.benchmarks

# Translations
*.mo

# Atom
*.cson

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Rope
.ropeproject

# Django stuff:
*.log
*.pot

# Sphinx documentation
doc/_build/

# Specifics
flask_restx/static
node_modules

# pyenv
.python-version

# Jet Brains
.idea
@ -0,0 +1,63 @@
# configure updates globally
# default: all
# allowed: all, insecure, False
# update: all

# configure dependency pinning globally
# default: True
# allowed: True, False
pin: False

# set the default branch
# default: empty, the default branch on GitHub
# branch: dev

# update schedule
# default: empty
# allowed: "every day", "every week", ..
# schedule: "every day"

# search for requirement files
# default: True
# allowed: True, False
# search: True

# Specify requirement files by hand, default is empty
# default: empty
# allowed: list
# requirements:
#   - requirements/staging.txt:
#       # update all dependencies and pin them
#       update: all
#       pin: True
#   - requirements/dev.txt:
#       # don't update dependencies, use global 'pin' default
#       update: False
#   - requirements/prod.txt:
#       # update insecure only, pin all
#       update: insecure
#       pin: True

# add a label to pull requests, default is not set
# requires private repo permissions, even on public repos
# default: empty
label_prs: update

# assign users to pull requests, default is not set
# requires private repo permissions, even on public repos
# default: empty
# assignees:
#   - carl
#   - carlsen

# configure the branch prefix the bot is using
# default: pyup-
branch_prefix: pyup/

# set a global prefix for PRs
# default: empty
pr_prefix: "[PyUP]"

# allow to close stale PRs
# default: True
close_prs: True
@ -0,0 +1,342 @@
Flask-RestX Changelog
=====================

Basic structure is

::

   ADD LINK (..) _section-VERSION
   VERSION
   -------
   ADD LINK (..) _bug_fixes-VERSION OR _enhancements-VERSION
   Bug Fixes or Enhancements
   ~~~~~~~~~~~~~~~~~~~~~~~~~~
   * Message (TICKET) [CONTRIBUTOR]

Opening a release
-----------------

If you’re the first contributor, add a new semver release to the
document. Place your addition in the correct category, giving a short
description (matching something in a git commit), the issue ID (or PR ID
if no issue opened), and your Github username for tracking contributors!

Releases prior to 0.3.0 were “best effort” filled out, but are missing
some info. If you see your contribution missing info, please open a PR
on the Changelog!

.. _section-1.3.0:

1.3.0
-----

.. _bug_fixes-1.3.0:

Bug Fixes
~~~~~~~~~

::

   * Fixing werkzeug 3 deprecated version import. Import is replaced by new style version check with importlib (#573) [Ryu-CZ]
   * Fixing flask 3.0+ compatibility of `ModuleNotFoundError: No module named 'flask.scaffold'` Import error. (#567) [Ryu-CZ]
   * Fix wrong status code and message on responses when handling `HTTPExceptions` (#569) [lkk7]
   * Add flask 2 and flask 3 to testing matrix. [foarsitter]
   * Update internally pinned pytest-flask to 1.3.0 for Flask >=3.0.0 support. [peter-doggart]
   * Python 3.12 support. [foarsitter]
   * Fix wrong status code and message on responses when handling HTTPExceptions. [lkk7]
   * Update changelog Flask version table. [peter-doggart]
   * Remove temporary package version restrictions for flask < 3.0.0, werkzeug and jsonschema (jsonschema future deprecation warning remains. See #553). [peter-doggart]

.. _section-1.2.0:

1.2.0
-----

.. _bug_fixes-1.2.0:

Bug Fixes
~~~~~~~~~

::

   * Fixing test as HTTP Header MIMEAccept expects quality-factor number in form of `X.X` (#547) [chipndell]
   * Introduce temporary restrictions on some package versions. (`flask<3.0.0`, `werkzeug<3.0.0`, `jsonschema<=4.17.3`) [peter-doggart]

.. _enhancements-1.2.0:

Enhancements
~~~~~~~~~~~~

::

   * Drop support for python 3.7

.. _section-1.1.0:

1.1.0
-----

.. _bug_fixes-1.1.0:

Bug Fixes
~~~~~~~~~

::

   * Update Swagger-UI to latest version to fix several security vulnerabilities. [peter-doggart]
   * Add a warning to the docs that nested Blueprints are not supported. [peter-doggart]
   * Add a note to the docs that flask-restx always registers the root (/) path. [peter-doggart]

.. _section-1.0.6:

1.0.6
-----

.. _bug_fixes-1.0.6:

Bug Fixes
~~~~~~~~~

::

   * Update Black to 2023 version [peter-doggart]
   * Fix minor bug introduced in 1.0.5 that changed the behaviour of how flask-restx propagates exceptions. (#512) [peter-doggart]
   * Update PyPi classifier to Production/Stable. [peter-doggart]
   * Add support for Python 3.11 (requires update to invoke ^2.0.0) [peter-doggart]

.. _section-1.0.5:

1.0.5
-----

.. _bug_fixes-1.0.5:

Bug Fixes
~~~~~~~~~

::

   * Fix failing pypy python setup in github actions
   * Fix compatibility with upcoming release of Flask 2.3+. (#485) [jdieter]

.. _section-1.0.2:

1.0.2
-----

.. _bug_fixes-1.0.2:

Bug Fixes
~~~~~~~~~

::

   * Properly remove six dependency

.. _section-1.0.1:

1.0.1
-----

.. _breaking-1.0.1:

Breaking
~~~~~~~~

Starting from this release, we only support python versions >= 3.7

.. _bug_fixes-1.0.1:

Bug Fixes
~~~~~~~~~

::

   * Fix compatibility issue with werkzeug 2.1.0 (#423) [stacywsmith]

.. _enhancements-1.0.1:

Enhancements
~~~~~~~~~~~~

::

   * Drop support for python <3.7

.. _section-0.5.1:

0.5.1
-----

.. _bug_fixes-0.5.1:

Bug Fixes
~~~~~~~~~

::

   * Optimize email regex (#372) [kevinbackhouse]

.. _section-0.5.0:

0.5.0
-----

.. _bug_fixes-0.5.0:

Bug Fixes
~~~~~~~~~

::

   * Fix Marshaled nested wildcard field with ordered=True (#326) [bdscharf]
   * Fix Float Field Handling of None (#327) [bdscharf, TVLIgnacy]
   * Fix Werkzeug and Flask > 2.0 issues (#341) [hbusul]
   * Hotfix package.json [xuhdev]

.. _enhancements-0.5.0:

Enhancements
~~~~~~~~~~~~

::

   * Stop calling got_request_exception when handled explicitly (#349) [chandlernine, VolkaRancho]
   * Update doc links (#332) [EtiennePelletier]
   * Structure demo zoo app (#328) [mehul-anshumali]
   * Update Contributing.rst (#323) [physikerwelt]
   * Upgrade swagger-ui (#316) [xuhdev]

.. _section-0.4.0:

0.4.0
-----

.. _bug_fixes-0.4.0:

Bug Fixes
~~~~~~~~~

::

   * Fix Namespace error handlers when propagate_exceptions=True (#285) [mjreiss]
   * pin flask and werkzeug due to breaking changes (#308) [jchittum]
   * The Flask/Blueprint API moved to the Scaffold base class (#308) [jloehel]

.. _enhancements-0.4.0:

Enhancements
~~~~~~~~~~~~

::

   * added specs-url-scheme option for API (#237) [DustinMoriarty]
   * Doc enhancements [KAUTH, Abdur-rahmaanJ]
   * New example with loosely coupled implementation [maurerle]

.. _section-0.3.0:

0.3.0
-----

.. _bug_fixes-0.3.0:

Bug Fixes
~~~~~~~~~

::

   * Make error handlers order of registration respected when handling errors (#202) [avilaton]
   * add prefix to config setting (#114) [heeplr]
   * Doc fixes [openbrian, mikhailpashkov, rich0rd, Rich107, kashyapm94, SteadBytes, ziirish]
   * Use relative path for `api.specs_url` (#188) [jslay88]
   * Allow example=False (#203) [ogenstad]
   * Add support for recursive models (#110) [peterjwest, buggyspace, Drarok, edwardfung123]
   * generate choices schema without collectionFormat (#164) [leopold-p]
   * Catch TypeError in marshalling (#75) [robyoung]
   * Unable to access nested list property (#91) [arajkumar]

.. _enhancements-0.3.0:

Enhancements
~~~~~~~~~~~~

::

   * Update Python versions [johnthagen]
   * allow strict mode when validating model fields (#186) [maho]
   * Make it possible to include "unused" models in the generated swagger documentation (#90) [volfpeter]

.. _section-0.2.0:

0.2.0
-----

This release properly fixes the issue raised by the release of werkzeug
1.0.

.. _bug-fixes-0.2.0:

Bug Fixes
~~~~~~~~~

::

   * Remove deprecated werkzeug imports (#35)
   * Fix OrderedDict imports (#54)
   * Fixing Swagger Issue when using @api.expect() on a request parser (#20)

.. _enhancements-0.2.0:

Enhancements
~~~~~~~~~~~~

::

   * use black to enforce a formatting codestyle (#60)
   * improve test workflows

.. _section-0.1.1:

0.1.1
-----

This release is mostly a hotfix release to address incompatibility issue
with the recent release of werkzeug 1.0.

.. _bug-fixes-0.1.1:

Bug Fixes
~~~~~~~~~

::

   * pin werkzeug version (#39)
   * register wildcard fields in docs (#24)
   * update package.json version accordingly with the flask-restx version and update the author (#38)

.. _enhancements-0.1.1:

Enhancements
~~~~~~~~~~~~

::

   * use github actions instead of travis-ci (#18)

.. _section-0.1.0:

0.1.0
-----

.. _bug-fixes-0.1.0:

Bug Fixes
~~~~~~~~~

::

   * Fix exceptions/error handling bugs https://github.com/noirbizarre/flask-restplus/pull/706/files noirbizarre/flask-restplus#741
   * Fix illegal characters in JSON references to model names noirbizarre/flask-restplus#653
   * Support envelope parameter in Swagger documentation noirbizarre/flask-restplus#673
   * Fix polymorph field ambiguity noirbizarre/flask-restplus#691
   * Fix wildcard support for fields.Nested and fields.List noirbizarre/flask-restplus#739

.. _enhancements-0.1.0:

Enhancements
~~~~~~~~~~~~

::

   * Api/Namespace individual loggers noirbizarre/flask-restplus#708
   * Various deprecated import changes noirbizarre/flask-restplus#732 noirbizarre/flask-restplus#738
   * Start the Flask-RESTX fork!
   * Rename all the things (#2 #9)
   * Set up releases from CI (#12)
   * Not a library enhancement but this was much needed - thanks @ziirish !
@ -0,0 +1,135 @@
Contributing
============

flask-restx is open-source and very open to contributions.

If you're part of a corporation with an NDA, you may require updating the license.
See Updating Copyright below.

Submitting issues
-----------------

Issues are contributions in a way, so don't hesitate
to submit reports on the `official bugtracker`_.

Provide as much information as possible to specify the issue:

- the flask-restx version used
- a stacktrace
- installed applications list
- a code sample to reproduce the issue
- ...


Submitting patches (bugfix, features, ...)
------------------------------------------

If you want to contribute some code:

1. Fork the `official flask-restx repository`_.
2. Ensure an issue is opened for your feature or bug.
3. Create a branch with an explicit name (like ``my-new-feature`` or ``issue-XX``).
4. Do your work in it.
5. Commit your changes. Ensure the commit message references the issue. Also, if contributing from a corporation, be sure to add a comment with the copyright information.
6. Rebase it on the master branch from the official repository (clean up your history by performing an interactive rebase).
7. Add your change to the changelog.
8. Submit your pull request.
9. Two maintainers should review the code for bugfixes and features; one maintainer for minor changes (such as docs).
10. After review, a maintainer will merge the PR. Maintainers should not merge their own PRs.
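The branch/commit/rebase steps can be dry-run locally before touching a real fork. The sketch below simulates steps 3 through 6 in a throwaway repository; the branch name ``issue-42``, commit messages, and user identity are purely illustrative, and ``git init -b`` requires git >= 2.28:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q -b master demo
cd demo
git config user.email "you@example.com"
git config user.name "Your Name"
echo "base" > file.txt
git add file.txt
git commit -q -m "initial commit"
# Step 3: a branch with an explicit name
git checkout -q -b issue-42
echo "fix" >> file.txt
git add file.txt
git commit -q -m "Fix crash on empty input (#42)"
# Step 6: rebase onto the up-to-date master before opening the PR
git rebase -q master
git log --oneline
```

Against a real fork, the rebase would typically follow a ``git fetch upstream`` so the branch replays on top of the latest upstream master.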
|
||||
There are some rules to follow:
|
||||
|
||||
- your contribution should be documented (if needed)
|
||||
- your contribution should be tested and the test suite should pass successfully
|
||||
- your code should be properly formatted (use ``black .`` to format)
|
||||
- your contribution should support both Python 2 and 3 (use ``tox`` to test)

You need to install some dependencies to develop on flask-restx:

.. code-block:: console

    $ pip install -e .[dev]

An `Invoke <https://www.pyinvoke.org/>`_ ``tasks.py`` is provided to simplify the common tasks:

.. code-block:: console

    $ inv -l
    Available tasks:

      all      Run tests, reports and packaging
      assets   Fetch web assets -- Swagger. Requires NPM (see below)
      clean    Cleanup all build artifacts
      cover    Run tests suite with coverage
      demo     Run the demo
      dist     Package for distribution
      doc      Build the documentation
      qa       Run a quality report
      test     Run tests suite
      tox      Run tests against Python versions

To ensure everything is fine before submission, use ``tox``.
It will run the test suite on all the supported Python versions
and ensure the documentation generates correctly.

.. code-block:: console

    $ tox

You also need to ensure your code is compliant with the flask-restx coding standards:

.. code-block:: console

    $ inv qa

To ensure everything is fine before committing, you can launch the all-in-one command:

.. code-block:: console

    $ inv qa tox

It will ensure the code meets the coding conventions, runs on every supported Python version,
and that the documentation generates properly.

.. _official flask-restx repository: https://github.com/python-restx/flask-restx
.. _official bugtracker: https://github.com/python-restx/flask-restx/issues


Running a local Swagger Server
------------------------------

For local development, you may wish to run a local Swagger server. Running the following will install one:

.. code-block:: console

    $ inv assets

NOTE: You'll need `NPM <https://docs.npmjs.com/getting-started/>`_ installed to do this.
If you're new to NPM, also check out `nvm <https://github.com/creationix/nvm/blob/master/README.md>`_.


Release process
---------------

The new releases are pushed on `Pypi.org <https://pypi.org/>`_ automatically
from `GitHub Actions <https://github.com/python-restx/flask-restx/actions?query=workflow%3ARelease>`_ when we add a new tag (unless the
tests are failing).

In order to prepare a new release, you can use `bumpr <https://github.com/noirbizarre/bumpr>`_,
which automates a few things.
You first need to install it, then run the ``bumpr`` command. You can then refer
to the `documentation <https://bumpr.readthedocs.io/en/latest/commandline.html>`_
for further details.
For instance, you would run ``bumpr -m`` (replace ``-m`` with ``-p`` or ``-M``
depending on the expected version bump).
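
The release preparation can then be sketched as a short console session (a sketch, assuming ``bumpr`` is installed and the repository's ``bumpr.rc`` configuration is in place; the ``-m`` flag below is illustrative):

.. code-block:: console

    $ pip install bumpr
    $ bumpr -m    # minor bump; use -p for a patch or -M for a major release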

Updating Copyright
------------------

If you're part of a corporation with an NDA, you may be required to update the
LICENSE file. This should be discussed and agreed upon by the project maintainers.

1. Check with your legal department first.
2. Add an appropriate line to the LICENSE file.
3. When making a commit, add the specific copyright notice.

Double-check with your legal department about their regulations. Not all changes
constitute new or unique work.

@ -0,0 +1,32 @@

BSD 3-Clause License

Original work Copyright (c) 2013 Twilio, Inc
Modified work Copyright (c) 2014 Axel Haustant
Modified work Copyright (c) 2020 python-restx Authors

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,5 @@

include README.rst MANIFEST.in LICENSE
recursive-include flask_restx *
recursive-include requirements *.pip

global-exclude *.pyc
@ -0,0 +1,216 @@

===========
Flask RESTX
===========

.. image:: https://github.com/python-restx/flask-restx/workflows/Tests/badge.svg?tag=1.3.0&event=push
    :target: https://github.com/python-restx/flask-restx/actions?query=workflow%3ATests
    :alt: Tests status
.. image:: https://codecov.io/gh/python-restx/flask-restx/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/python-restx/flask-restx
    :alt: Code coverage
.. image:: https://readthedocs.org/projects/flask-restx/badge/?version=1.3.0
    :target: https://flask-restx.readthedocs.io/en/1.3.0/
    :alt: Documentation status
.. image:: https://img.shields.io/pypi/l/flask-restx.svg
    :target: https://pypi.org/project/flask-restx
    :alt: License
.. image:: https://img.shields.io/pypi/pyversions/flask-restx.svg
    :target: https://pypi.org/project/flask-restx
    :alt: Supported Python versions
.. image:: https://badges.gitter.im/Join%20Chat.svg
    :target: https://gitter.im/python-restx?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
    :alt: Join the chat at https://gitter.im/python-restx
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
    :target: https://github.com/psf/black
    :alt: Code style: black


Flask-RESTX is a community driven fork of `Flask-RESTPlus <https://github.com/noirbizarre/flask-restplus>`_.


Flask-RESTX is an extension for `Flask`_ that adds support for quickly building REST APIs.
Flask-RESTX encourages best practices with minimal setup.
If you are familiar with Flask, Flask-RESTX should be easy to pick up.
It provides a coherent collection of decorators and tools to describe your API
and expose its documentation properly using `Swagger`_.


Compatibility
=============

Flask-RESTX requires Python 3.8+.

On Flask Compatibility
======================

Flask and Werkzeug moved to versions 2.0 in March 2020. This caused a breaking change in Flask-RESTX.

.. list-table:: RESTX and Flask / Werkzeug Compatibility
    :widths: 25 25 25
    :header-rows: 1

    * - Flask-RESTX version
      - Flask version
      - Note
    * - <= 0.3.0
      - < 2.0.0
      - unpinned in Flask-RESTX. Pin your projects!
    * - == 0.4.0
      - < 2.0.0
      - pinned in Flask-RESTX.
    * - >= 0.5.0
      - < 3.0.0
      - unpinned, import statements wrapped for compatibility
    * - == 1.2.0
      - < 3.0.0
      - pinned in Flask-RESTX.
    * - >= 1.3.0
      - >= 2.0.0 (Flask >= 3.0.0 support)
      - unpinned, import statements wrapped for compatibility
    * - trunk branch in Github
      - >= 2.0.0 (Flask >= 3.0.0 support)
      - unpinned, will address issues faster than releases.
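
Given the compatibility table, a project that needs to stay on a specific combination can pin both packages explicitly; a minimal sketch (the versions shown are illustrative, not a recommendation):

.. code-block:: console

    $ pip install "flask-restx==1.3.0" "flask>=2.0,<4.0"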

Installation
============

You can install Flask-RESTX with pip:

.. code-block:: console

    $ pip install flask-restx

or with easy_install:

.. code-block:: console

    $ easy_install flask-restx


Quick start
===========

With Flask-RESTX, you only import the api instance to route and document your endpoints.

.. code-block:: python

    from flask import Flask
    from flask_restx import Api, Resource, fields

    app = Flask(__name__)
    api = Api(app, version='1.0', title='TodoMVC API',
        description='A simple TodoMVC API',
    )

    ns = api.namespace('todos', description='TODO operations')

    todo = api.model('Todo', {
        'id': fields.Integer(readonly=True, description='The task unique identifier'),
        'task': fields.String(required=True, description='The task details')
    })


    class TodoDAO(object):
        def __init__(self):
            self.counter = 0
            self.todos = []

        def get(self, id):
            for todo in self.todos:
                if todo['id'] == id:
                    return todo
            api.abort(404, "Todo {} doesn't exist".format(id))

        def create(self, data):
            todo = data
            todo['id'] = self.counter = self.counter + 1
            self.todos.append(todo)
            return todo

        def update(self, id, data):
            todo = self.get(id)
            todo.update(data)
            return todo

        def delete(self, id):
            todo = self.get(id)
            self.todos.remove(todo)


    DAO = TodoDAO()
    DAO.create({'task': 'Build an API'})
    DAO.create({'task': '?????'})
    DAO.create({'task': 'profit!'})


    @ns.route('/')
    class TodoList(Resource):
        '''Shows a list of all todos, and lets you POST to add new tasks'''
        @ns.doc('list_todos')
        @ns.marshal_list_with(todo)
        def get(self):
            '''List all tasks'''
            return DAO.todos

        @ns.doc('create_todo')
        @ns.expect(todo)
        @ns.marshal_with(todo, code=201)
        def post(self):
            '''Create a new task'''
            return DAO.create(api.payload), 201


    @ns.route('/<int:id>')
    @ns.response(404, 'Todo not found')
    @ns.param('id', 'The task identifier')
    class Todo(Resource):
        '''Show a single todo item and lets you delete them'''
        @ns.doc('get_todo')
        @ns.marshal_with(todo)
        def get(self, id):
            '''Fetch a given resource'''
            return DAO.get(id)

        @ns.doc('delete_todo')
        @ns.response(204, 'Todo deleted')
        def delete(self, id):
            '''Delete a task given its identifier'''
            DAO.delete(id)
            return '', 204

        @ns.expect(todo)
        @ns.marshal_with(todo)
        def put(self, id):
            '''Update a task given its identifier'''
            return DAO.update(id, api.payload)


    if __name__ == '__main__':
        app.run(debug=True)
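
Once the app above is running (by default Flask serves on http://127.0.0.1:5000), the endpoints can be exercised from a shell; a sketch using ``curl`` against the ``todos`` namespace defined above:

.. code-block:: console

    $ curl http://127.0.0.1:5000/todos/
    $ curl -X POST -H "Content-Type: application/json" \
           -d '{"task": "write docs"}' http://127.0.0.1:5000/todos/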

Contributors
============

Flask-RESTX is brought to you by @python-restx. Since early 2019, @SteadBytes,
@a-luna, @j5awry, and @ziirish volunteered to help @python-restx keep the project up
and running, and they did so for a long time! Since the beginning of 2023, the project
has been maintained by @peter-doggart with help from @ziirish.
Of course, everyone is welcome to contribute, and we will be happy to review your
PRs or answer your issues.


Documentation
=============

The documentation is hosted `on Read the Docs <http://flask-restx.readthedocs.io/en/latest/>`_


.. _Flask: https://flask.palletsprojects.com/
.. _Swagger: https://swagger.io/


Contribution
============
Want to contribute? That's awesome! Check out `CONTRIBUTING.rst! <https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst>`_

@ -0,0 +1,25 @@

[bumpr]
file = flask_restx/__about__.py
vcs = git
commit = true
tag = true
push = true
tests = tox -e py38
clean =
    inv clean
files =
    README.rst

[bump]
unsuffix = true

[prepare]
part = patch
suffix = dev

[readthedoc]
id = flask-restx

[replace]
dev = ?branch=master
stable = ?tag={version}
@ -0,0 +1,25 @@

[run]
source = flask_restx
branch = True
omit =
    /tests/*

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:

ignore_errors = True
@ -0,0 +1,7 @@

opengnsys-flask-restx (1.3.0) UNRELEASED; urgency=medium

  * Initial version

 -- Vadim Troshchinskiy <vtroshchinskiy@qindel.com>  Tue, 23 Dec 2024 10:47:04 +0000
@ -0,0 +1,34 @@

Source: opengnsys-flask-restx
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
               dh-python,
               libarchive-dev,
               python3-all,
               python3-mock,
               python3-pytest,
               python3-setuptools,
               python3-aniso8601,
               faker,
               python3-importlib-resources,
               python3-pytest-flask,
               python3-pytest-mock,
               python3-pytest-benchmark
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/python-restx/flask-restx
Vcs-Browser: https://github.com/python-restx/flask-restx
Vcs-Git: https://github.com/python-restx/flask-restx

Package: opengnsys-flask-restx
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Flask-RESTX is a community driven fork of Flask-RESTPlus.
 Flask-RESTX is an extension for Flask that adds support for quickly building
 REST APIs. Flask-RESTX encourages best practices with minimal setup.
 .
 If you are familiar with Flask, Flask-RESTX should be easy to pick up.
 It provides a coherent collection of decorators and tools to describe your
 API and expose its documentation properly using Swagger.
@ -0,0 +1,208 @@

Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-libarchive-c
Source: https://github.com/Changaco/python-libarchive-c

Files: *
Copyright: 2014-2018 Changaco <changaco@changaco.oy.lc>
License: CC-0

Files: tests/surrogateescape.py
Copyright: 2015 Changaco <changaco@changaco.oy.lc>
           2011-2013 Victor Stinner <victor.stinner@gmail.com>
License: BSD-2-clause or PSF-2

Files: debian/*
Copyright: 2015 Jerémy Bobbio <lunar@debian.org>
           2019 Mattia Rizzolo <mattia@debian.org>
License: permissive
 Copying and distribution of this package, with or without
 modification, are permitted in any medium without royalty
 provided the copyright notice and this notice are
 preserved.

License: BSD-2-clause
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
 are met:
 * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
 * Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.
 .
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
 OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 SUCH DAMAGE.

License: PSF-2
 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
 and the Individual or Organization ("Licensee") accessing and otherwise using
 this software ("Python") in source or binary form and its associated
 documentation.
 .
 2. Subject to the terms and conditions of this License Agreement, PSF hereby
 grants Licensee a nonexclusive, royalty-free, world-wide license to
 reproduce, analyze, test, perform and/or display publicly, prepare derivative
 works, distribute, and otherwise use Python alone or in any derivative
 version, provided, however, that PSF's License Agreement and PSF's notice of
 copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python
 Software Foundation; All Rights Reserved" are retained in Python alone or in
 any derivative version prepared by Licensee.
 .
 3. In the event Licensee prepares a derivative work that is based on or
 incorporates Python or any part thereof, and wants to make the derivative
 work available to others as provided herein, then Licensee hereby agrees to
 include in any such work a brief summary of the changes made to Python.
 .
 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
 NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT
 NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF
 MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
 PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
 .
 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
 INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
 MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
 THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
 .
 6. This License Agreement will automatically terminate upon a material breach
 of its terms and conditions.
 .
 7. Nothing in this License Agreement shall be deemed to create any
 relationship of agency, partnership, or joint venture between PSF and
 Licensee. This License Agreement does not grant permission to use PSF
 trademarks or trade name in a trademark sense to endorse or promote products
 or services of Licensee, or any third party.
 .
 8. By copying, installing or otherwise using Python, Licensee agrees to be
 bound by the terms and conditions of this License Agreement.

License: CC-0
 Statement of Purpose
 .
 The laws of most jurisdictions throughout the world automatically
 confer exclusive Copyright and Related Rights (defined below) upon
 the creator and subsequent owner(s) (each and all, an "owner") of an
 original work of authorship and/or a database (each, a "Work").
 .
 Certain owners wish to permanently relinquish those rights to a Work
 for the purpose of contributing to a commons of creative, cultural
 and scientific works ("Commons") that the public can reliably and
 without fear of later claims of infringement build upon, modify,
 incorporate in other works, reuse and redistribute as freely as
 possible in any form whatsoever and for any purposes, including
 without limitation commercial purposes. These owners may contribute
 to the Commons to promote the ideal of a free culture and the further
 production of creative, cultural and scientific works, or to gain
 reputation or greater distribution for their Work in part through the
 use and efforts of others.
 .
 For these and/or other purposes and motivations, and without any
 expectation of additional consideration or compensation, the person
 associating CC0 with a Work (the "Affirmer"), to the extent that he
 or she is an owner of Copyright and Related Rights in the Work,
 voluntarily elects to apply CC0 to the Work and publicly distribute
 the Work under its terms, with knowledge of his or her Copyright and
 Related Rights in the Work and the meaning and intended legal effect
 of CC0 on those rights.
 .
 1. Copyright and Related Rights. A Work made available under CC0 may
 be protected by copyright and related or neighboring rights
 ("Copyright and Related Rights"). Copyright and Related Rights
 include, but are not limited to, the following:
 .
   i. the right to reproduce, adapt, distribute, perform, display,
      communicate, and translate a Work;
   ii. moral rights retained by the original author(s) and/or
      performer(s);
   iii. publicity and privacy rights pertaining to a person's image
      or likeness depicted in a Work;
   iv. rights protecting against unfair competition in regards to a
      Work, subject to the limitations in paragraph 4(a), below;
   v. rights protecting the extraction, dissemination, use and
      reuse of data in a Work;
   vi. database rights (such as those arising under Directive
      96/9/EC of the European Parliament and of the Council of 11
      March 1996 on the legal protection of databases, and under
      any national implementation thereof, including any amended or
      successor version of such directive); and
   vii. other similar, equivalent or corresponding rights throughout
      the world based on applicable law or treaty, and any national
      implementations thereof.
 .
 2. Waiver. To the greatest extent permitted by, but not in
 contravention of, applicable law, Affirmer hereby overtly, fully,
 permanently, irrevocably and unconditionally waives, abandons, and
 surrenders all of Affirmer's Copyright and Related Rights and
 associated claims and causes of action, whether now known or
 unknown (including existing as well as future claims and causes of
 action), in the Work (i) in all territories worldwide, (ii) for
 the maximum duration provided by applicable law or treaty
 (including future time extensions), (iii) in any current or future
 medium and for any number of copies, and (iv) for any purpose
 whatsoever, including without limitation commercial, advertising
 or promotional purposes (the "Waiver"). Affirmer makes the Waiver
 for the benefit of each member of the public at large and to the
 detriment of Affirmer's heirs and successors, fully intending that
 such Waiver shall not be subject to revocation, rescission,
 cancellation, termination, or any other legal or equitable action
 to disrupt the quiet enjoyment of the Work by the public as
 contemplated by Affirmer's express Statement of Purpose.
 .
 3. Public License Fallback. Should any part of the Waiver for any
 reason be judged legally invalid or ineffective under applicable law,
 then the Waiver shall be preserved to the maximum extent permitted
 taking into account Affirmer's express Statement of Purpose. In
 addition, to the extent the Waiver is so judged Affirmer hereby
 grants to each affected person a royalty-free, non transferable, non
 sublicensable, non exclusive, irrevocable and unconditional license
 to exercise Affirmer's Copyright and Related Rights in the Work (i)
 in all territories worldwide, (ii) for the maximum duration provided
 by applicable law or treaty (including future time extensions), (iii)
 in any current or future medium and for any number of copies, and
 (iv) for any purpose whatsoever, including without limitation
 commercial, advertising or promotional purposes (the "License"). The
 License shall be deemed effective as of the date CC0 was applied by
 Affirmer to the Work. Should any part of the License for any reason
 be judged legally invalid or ineffective under applicable law, such
 partial invalidity or ineffectiveness shall not invalidate the
 remainder of the License, and in such case Affirmer hereby affirms
 that he or she will not (i) exercise any of his or her remaining
 Copyright and Related Rights in the Work or (ii) assert any
 associated claims and causes of action with respect to the Work, in
 either case contrary to Affirmer's express Statement of Purpose.
 .
 4. Limitations and Disclaimers.
 .
   a. No trademark or patent rights held by Affirmer are waived,
      abandoned, surrendered, licensed or otherwise affected by
      this document.
   b. Affirmer offers the Work as-is and makes no representations
      or warranties of any kind concerning the Work, express,
      implied, statutory or otherwise, including without limitation
      warranties of title, merchantability, fitness for a
      particular purpose, non infringement, or the absence of
      latent or other defects, accuracy, or the present or absence
      of errors, whether or not discoverable, all to the greatest
      extent permissible under applicable law.
   c. Affirmer disclaims responsibility for clearing rights of
      other persons that may apply to the Work or any use thereof,
      including without limitation any person's Copyright and
      Related Rights in the Work. Further, Affirmer disclaims
      responsibility for obtaining any necessary consents,
      permissions or other rights required for any use of the
      Work.
   d. Affirmer understands and acknowledges that Creative Commons
      is not a party to this document and has no duty or obligation
      with respect to this CC0 or use of the Work.

@ -0,0 +1,25 @@

#!/usr/bin/make -f

export LC_ALL=C.UTF-8
export PYBUILD_NAME = flask-restx
#export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
#export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst
# ./usr/lib/python3/dist-packages/libarchive/
export PYBUILD_INSTALL_ARGS=--install-lib=/usr/share/opengnsys-modules/python3/dist-packages/

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_gencontrol:
	dh_gencontrol -- \
	  -Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libarchive-dev \
	    | sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/')

override_dh_installdocs:
	# Nothing, we don't want docs

override_dh_installchangelogs:
	# Nothing, we don't want the changelog
#
override_dh_auto_test:
	# One test is broken, just disable for now
@ -0,0 +1 @@

3.0 (quilt)
@ -0,0 +1,2 @@

Tests: upstream-tests
Depends: @, python3-mock, python3-pytest
@ -0,0 +1,14 @@

#!/bin/sh

set -e

if ! [ -d "$AUTOPKGTEST_TMP" ]; then
	echo "AUTOPKGTEST_TMP not set." >&2
	exit 1
fi

cp -rv tests "$AUTOPKGTEST_TMP"
cd "$AUTOPKGTEST_TMP"
mkdir -v libarchive
touch README.rst
py.test-3 tests -vv -l -r a
@ -0,0 +1,177 @@

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from https://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Flask-RESTX.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Flask-RESTX.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/Flask-RESTX"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Flask-RESTX"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
||||
|
||||
xml:
|
||||
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
|
||||
@echo
|
||||
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
|
||||
|
||||
pseudoxml:
|
||||
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
|
||||
@echo
|
||||
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
|
@ -0,0 +1,7 @@
<!--h3>Links</h3-->
{% if theme_badges %}
<hr class="badges" />
{% for badge, target, alt in theme_badges %}
<p class="badge"><a href="{{target}}"><img src="{{badge}}" alt="{{alt}}" /></a></p>
{% endfor %}
{% endif %}
@ -0,0 +1,10 @@
{% extends "alabaster/layout.html" %}

{%- block extrahead %}
{% if theme_favicons %}
{% for size, file in theme_favicons.items() %}
<link rel="icon" type="image/png" href="{{ pathto('_static/' ~ file, 1) }}" sizes="{{size}}x{{size}}">
{% endfor %}
{% endif %}
{{ super() }}
{% endblock %}
@ -0,0 +1,12 @@
@import url("alabaster.css");

.sphinxsidebar p.badge a {
    border: none;
}

.sphinxsidebar hr.badges {
    border: 0;
    border-bottom: 1px dashed #aaa;
    background: none;
    /*width: 100%;*/
}
@ -0,0 +1,7 @@
[theme]
inherit = alabaster
stylesheet = restx.css

[options]
favicons=
badges=
@ -0,0 +1,98 @@
.. _api:

API
===

.. currentmodule:: flask_restx

Core
----

.. autoclass:: Api
   :members:
   :inherited-members:

.. autoclass:: Namespace
   :members:


.. autoclass:: Resource
   :members:
   :inherited-members:


Models
------

.. autoclass:: flask_restx.Model
   :members:

All fields accept a ``required`` boolean and a ``description`` string in ``kwargs``.

.. automodule:: flask_restx.fields
   :members:


Serialization
-------------
.. currentmodule:: flask_restx

.. autofunction:: marshal

.. autofunction:: marshal_with

.. autofunction:: marshal_with_field

.. autoclass:: flask_restx.mask.Mask
   :members:

.. autofunction:: flask_restx.mask.apply
Request parsing
---------------

.. automodule:: flask_restx.reqparse
   :members:

Inputs
~~~~~~

.. automodule:: flask_restx.inputs
   :members:


Errors
------

.. automodule:: flask_restx.errors
   :members:

.. autoexception:: flask_restx.fields.MarshallingError

.. autoexception:: flask_restx.mask.MaskError

.. autoexception:: flask_restx.mask.ParseError


Schemas
-------

.. automodule:: flask_restx.schemas
   :members:


Internals
---------

These are internal classes or helpers.
Most of the time you shouldn't have to deal directly with them.

.. autoclass:: flask_restx.api.SwaggerView

.. autoclass:: flask_restx.swagger.Swagger

.. autoclass:: flask_restx.postman.PostmanCollectionV1

.. automodule:: flask_restx.utils
   :members:
@ -0,0 +1,342 @@
# -*- coding: utf-8 -*-
#
# Flask-RESTX documentation build configuration file, created by
# sphinx-quickstart on Wed Aug 13 17:07:14 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import os
import sys
import alabaster

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath(".."))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.viewcode",
    "sphinx.ext.intersphinx",
    "sphinx.ext.todo",
    "sphinx_issues",
    "alabaster",
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix of source filenames.
source_suffix = ".rst"

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = "index"

# General information about the project.
project = "Flask-RESTX"
copyright = "2020, python-restx Authors"

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = __import__("flask_restx").__version__
# The short X.Y version.
version = ".".join(release.split(".")[:1])

# Github repo
issues_github_path = "python-restx/flask-restx"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "restx"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    "logo": "logo-512.png",
    "logo_name": True,
    "touch_icon": "apple-180.png",
    "github_user": "python-restx",
    "github_repo": "flask-restx",
    "github_banner": True,
    "show_related": True,
    "page_width": "1000px",
    "sidebar_width": "260px",
    "favicons": {
        64: "favicon-64.png",
        128: "favicon-128.png",
        196: "favicon-196.png",
    },
    "badges": [
        (
            # Gitter.im
            "https://badges.gitter.im/Join%20Chat.svg",
            "https://gitter.im/python-restx",
            "Join the chat at https://gitter.im/python-restx",
        ),
        (
            # Github Fork
            "https://img.shields.io/github/forks/python-restx/flask-restx.svg?style=social&label=Fork",
            "https://github.com/python-restx/flask-restx",
            "Github repository",
        ),
        (
            # Github issues
            "https://img.shields.io/github/issues-raw/python-restx/flask-restx.svg",
            "https://github.com/python-restx/flask-restx/issues",
            "Github repository",
        ),
        (
            # License
            "https://img.shields.io/github/license/python-restx/flask-restx.svg",
            "https://github.com/python-restx/flask-restx",
            "License",
        ),
        (
            # PyPI
            "https://img.shields.io/pypi/v/flask-restx.svg",
            "https://pypi.python.org/pypi/flask-restx",
            "Latest version on PyPI",
        ),
    ],
}

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [alabaster.get_path(), "_themes"]

html_context = {}

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "_static/favicon.ico"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
html_sidebars = {
    "**": [
        "about.html",
        "navigation.html",
        "relations.html",
        "searchbox.html",
        "donate.html",
        "badges.html",
    ]
}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = "Flask-RESTXdoc"


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        "index",
        "Flask-RESTX.tex",
        "Flask-RESTX Documentation",
        "python-restx Authors",
        "manual",
    ),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ("index", "flask-restx", "Flask-RESTX Documentation", ["python-restx Authors"], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        "index",
        "Flask-RESTX",
        "Flask-RESTX Documentation",
        "python-restx Authors",
        "Flask-RESTX",
        "One line description of project.",
        "Miscellaneous",
    ),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False


intersphinx_mapping = {
    "flask": ("https://flask.palletsprojects.com/", None),
    "python": ("https://docs.python.org/", None),
    "werkzeug": ("https://werkzeug.palletsprojects.com/", None),
}
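One detail worth noting in the ``conf.py`` above: the short ``version`` is derived from ``release`` with ``release.split(".")[:1]``, which keeps only the first dot-separated component. For a release such as ``1.3.0`` (an illustrative value, not the actual project version) this yields ``1`` rather than an ``X.Y`` form:

```python
# Mirrors the version computation in conf.py; "1.3.0" is an example value only.
release = "1.3.0"
version = ".".join(release.split(".")[:1])
print(version)  # -> 1
```

Use ``[:2]`` instead if a true ``X.Y`` short version is wanted.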
@ -0,0 +1,65 @@
Configuration
=============

Flask-RESTX provides the following `Flask configuration values <https://flask.palletsprojects.com/en/1.1.x/config/#configuration-handling>`_:

Note: Values with no additional description should be covered in more detail
elsewhere in the documentation. If not, please open an issue on GitHub.

.. py:data:: RESTX_JSON

   Provide global configuration options for JSON serialisation as a :class:`dict`
   of :func:`json.dumps` keyword arguments.
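Since ``RESTX_JSON`` entries are passed straight through to :func:`json.dumps`, the effect of a given setting can be previewed with the standard library alone (the ``settings`` dict here is illustrative, not a recommended default):

```python
import json

# Illustrative RESTX_JSON-style settings: plain json.dumps keyword arguments.
settings = {"indent": 4, "sort_keys": True}
payload = {"b": 1, "a": 2}
print(json.dumps(payload, **settings))  # keys sorted, 4-space indentation
```

In an application you would place the same dict in ``app.config["RESTX_JSON"]``.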
.. py:data:: RESTX_VALIDATE

   Whether to enforce payload validation by default when using the
   ``@api.expect()`` decorator. See the `@api.expect()
   <swagger.html#the-api-expect-decorator>`__ documentation for details.
   This setting defaults to ``False``.

.. py:data:: RESTX_MASK_HEADER

   Choose the name of the *Header* that will contain the masks to apply to your
   answer. See the `Fields masks <mask.html>`__ documentation for details.
   This setting defaults to ``X-Fields``.

.. py:data:: RESTX_MASK_SWAGGER

   Whether to enable the mask documentation in your swagger or not. See the
   `mask usage <mask.html#usage>`__ documentation for details.
   This setting defaults to ``True``.

.. py:data:: RESTX_INCLUDE_ALL_MODELS

   This option allows you to include all defined models in the generated Swagger
   documentation, even if they are not explicitly used in either the ``expect`` or the
   ``marshal_with`` decorators.
   This setting defaults to ``False``.

.. py:data:: BUNDLE_ERRORS

   Bundle all the validation errors instead of returning only the first one
   encountered. See the `Error Handling <parsing.html#error-handling>`__ section
   of the documentation for details.
   This setting defaults to ``False``.

.. py:data:: ERROR_404_HELP

.. py:data:: HTTP_BASIC_AUTH_REALM

.. py:data:: SWAGGER_VALIDATOR_URL

.. py:data:: SWAGGER_UI_DOC_EXPANSION

.. py:data:: SWAGGER_UI_OPERATION_ID

.. py:data:: SWAGGER_UI_REQUEST_DURATION

.. py:data:: SWAGGER_UI_OAUTH_APP_NAME

.. py:data:: SWAGGER_UI_OAUTH_CLIENT_ID

.. py:data:: SWAGGER_UI_OAUTH_REALM

.. py:data:: SWAGGER_SUPPORTED_SUBMIT_METHODS
@ -0,0 +1 @@
.. include:: ../CONTRIBUTING.rst
@ -0,0 +1,227 @@
Error handling
==============

.. currentmodule:: flask_restx

HTTPException handling
----------------------

Werkzeug :exc:`~werkzeug.exceptions.HTTPException` instances are automatically
serialized, reusing their ``description`` attribute.

.. code-block:: python

    from werkzeug.exceptions import BadRequest
    raise BadRequest()

will return a 400 HTTP code and output

.. code-block:: json

    {
        "message": "The browser (or proxy) sent a request that this server could not understand."
    }

whereas this:

.. code-block:: python

    from werkzeug.exceptions import BadRequest
    raise BadRequest('My custom message')

will output

.. code-block:: json

    {
        "message": "My custom message"
    }

You can attach extra attributes to the output by setting a ``data`` attribute on your exception.

.. code-block:: python

    from werkzeug.exceptions import BadRequest
    e = BadRequest('My custom message')
    e.data = {'custom': 'value'}
    raise e

will output

.. code-block:: json

    {
        "message": "My custom message",
        "custom": "value"
    }

The Flask abort helper
----------------------

The :meth:`abort <werkzeug.exceptions.Aborter.__call__>` helper
properly wraps errors into an :exc:`~werkzeug.exceptions.HTTPException`,
so it will have the same behavior.

.. code-block:: python

    from flask import abort
    abort(400)

will return a 400 HTTP code and output

.. code-block:: json

    {
        "message": "The browser (or proxy) sent a request that this server could not understand."
    }

whereas this:

.. code-block:: python

    from flask import abort
    abort(400, 'My custom message')

will output

.. code-block:: json

    {
        "message": "My custom message"
    }


The Flask-RESTX abort helper
----------------------------

The :func:`errors.abort` and the :meth:`Namespace.abort` helpers
work like the original Flask :func:`flask.abort`,
but they also add the keyword arguments to the response.

.. code-block:: python

    from flask_restx import abort
    abort(400, custom='value')

will return a 400 HTTP code and output

.. code-block:: json

    {
        "message": "The browser (or proxy) sent a request that this server could not understand.",
        "custom": "value"
    }

whereas this:

.. code-block:: python

    from flask_restx import abort
    abort(400, 'My custom message', custom='value')

will output

.. code-block:: json

    {
        "message": "My custom message",
        "custom": "value"
    }


The ``@api.errorhandler`` decorator
-----------------------------------

The :meth:`@api.errorhandler <Api.errorhandler>` decorator
allows you to register a specific handler for a given exception (or any exception inherited from it), in the same manner
that you can do with the Flask/Blueprint :meth:`@errorhandler <flask:flask.Flask.errorhandler>` decorator.

.. code-block:: python

    @api.errorhandler(RootException)
    def handle_root_exception(error):
        '''Return a custom message and 400 status code'''
        return {'message': 'What you want'}, 400


    @api.errorhandler(CustomException)
    def handle_custom_exception(error):
        '''Return a custom message and 400 status code'''
        return {'message': 'What you want'}, 400


    @api.errorhandler(AnotherException)
    def handle_another_exception(error):
        '''Return a custom message and 500 status code'''
        return {'message': error.specific}


    @api.errorhandler(FakeException)
    def handle_fake_exception_with_header(error):
        '''Return a custom message and 400 status code'''
        return {'message': error.message}, 400, {'My-Header': 'Value'}


    @api.errorhandler(NoResultFound)
    def handle_no_result_exception(error):
        '''Return a custom not found error message and 404 status code'''
        return {'message': error.specific}, 404


.. note ::

    A "NoResultFound" error with description is required by the OpenAPI 2.0 spec. The docstring in the error handler function is output in the swagger.json as the description.

You can also document the error:

.. code-block:: python

    @api.errorhandler(FakeException)
    @api.marshal_with(error_fields, code=400)
    @api.header('My-Header', 'Some description')
    def handle_fake_exception_with_header(error):
        '''This is a custom error'''
        return {'message': error.message}, 400, {'My-Header': 'Value'}


    @api.route('/test/')
    class TestResource(Resource):
        def get(self):
            '''
            Do something

            :raises CustomException: In case of something
            '''
            pass

In this example, the ``:raises:`` docstring will be automatically extracted
and the response 400 will be documented properly.


It also allows for overriding the default error handler when used without parameter:

.. code-block:: python

    @api.errorhandler
    def default_error_handler(error):
        '''Default error handler'''
        return {'message': str(error)}, getattr(error, 'code', 500)

.. note ::

    Flask-RESTX will return a message in the error response by default.
    If a custom response is required as an error and the message field is not needed,
    it can be disabled by setting ``ERROR_INCLUDE_MESSAGE`` to ``False`` in your application config.

Error handlers can also be registered on namespaces. An error handler registered on a namespace
will override one registered on the api.


.. code-block:: python

    ns = Namespace('cats', description='Cats related operations')

    @ns.errorhandler
    def specific_namespace_error_handler(error):
        '''Namespace error handler'''
        return {'message': str(error)}, getattr(error, 'code', 500)
@ -0,0 +1,108 @@
Full example
============

Here is a full example of a `TodoMVC <https://todomvc.com/>`_ API.

.. code-block:: python

    from flask import Flask
    from flask_restx import Api, Resource, fields
    from werkzeug.middleware.proxy_fix import ProxyFix

    app = Flask(__name__)
    app.wsgi_app = ProxyFix(app.wsgi_app)
    api = Api(app, version='1.0', title='TodoMVC API',
        description='A simple TodoMVC API',
    )

    ns = api.namespace('todos', description='TODO operations')

    todo = api.model('Todo', {
        'id': fields.Integer(readonly=True, description='The task unique identifier'),
        'task': fields.String(required=True, description='The task details')
    })


    class TodoDAO(object):
        def __init__(self):
            self.counter = 0
            self.todos = []

        def get(self, id):
            for todo in self.todos:
                if todo['id'] == id:
                    return todo
            api.abort(404, "Todo {} doesn't exist".format(id))

        def create(self, data):
            todo = data
            todo['id'] = self.counter = self.counter + 1
            self.todos.append(todo)
            return todo

        def update(self, id, data):
            todo = self.get(id)
            todo.update(data)
            return todo

        def delete(self, id):
            todo = self.get(id)
            self.todos.remove(todo)


    DAO = TodoDAO()
    DAO.create({'task': 'Build an API'})
    DAO.create({'task': '?????'})
    DAO.create({'task': 'profit!'})


    @ns.route('/')
    class TodoList(Resource):
        '''Shows a list of all todos, and lets you POST to add new tasks'''
        @ns.doc('list_todos')
        @ns.marshal_list_with(todo)
        def get(self):
            '''List all tasks'''
            return DAO.todos

        @ns.doc('create_todo')
        @ns.expect(todo)
        @ns.marshal_with(todo, code=201)
        def post(self):
            '''Create a new task'''
            return DAO.create(api.payload), 201


    @ns.route('/<int:id>')
    @ns.response(404, 'Todo not found')
    @ns.param('id', 'The task identifier')
    class Todo(Resource):
        '''Show a single todo item and lets you delete them'''
        @ns.doc('get_todo')
        @ns.marshal_with(todo)
        def get(self, id):
            '''Fetch a given resource'''
            return DAO.get(id)

        @ns.doc('delete_todo')
        @ns.response(204, 'Todo deleted')
        def delete(self, id):
            '''Delete a task given its identifier'''
            DAO.delete(id)
            return '', 204

        @ns.expect(todo)
        @ns.marshal_with(todo)
        def put(self, id):
            '''Update a task given its identifier'''
            return DAO.update(id, api.payload)


    if __name__ == '__main__':
        app.run(debug=True)
You can find other examples in the `github repository examples folder`_.

.. _github repository examples folder: https://github.com/python-restx/flask-restx/tree/master/examples
@ -0,0 +1,103 @@
.. Flask-RESTX documentation master file, created by
   sphinx-quickstart on Wed Aug 13 17:07:14 2014.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Flask-RESTX's documentation!
=======================================

Flask-RESTX is an extension for Flask that adds support for quickly building REST APIs.
Flask-RESTX encourages best practices with minimal setup.
If you are familiar with Flask, Flask-RESTX should be easy to pick up.
It provides a coherent collection of decorators and tools to describe your API
and expose its documentation properly (using Swagger).

Flask-RESTX is a community-driven fork of `Flask-RESTPlus
<https://github.com/noirbizarre/flask-restplus>`_.


Why did we fork?
================

The community has decided to fork the project due to lack of response from the
original author @noirbizarre. We have been discussing this eventuality for
`a long time <https://github.com/noirbizarre/flask-restplus/issues/593>`_.

Things have evolved a bit since that discussion, and a few of us have been granted
maintainer access to the github project, but only the original author has
access rights on the PyPI project. As such, we have been unable to make any actual
releases. To prevent this project from dying out, we have forked it to continue
development and to support our users.


Compatibility
=============

Flask-RESTX requires Python 3.8+.


Installation
============

You can install Flask-RESTX with pip:

.. code-block:: console

    $ pip install flask-restx

or with easy_install:

.. code-block:: console

    $ easy_install flask-restx


Documentation
=============

This part of the documentation will show you how to get started in using
Flask-RESTX with Flask.

.. toctree::
    :maxdepth: 2

    installation
    quickstart
    marshalling
    parsing
    errors
    mask
    swagger
    logging
    postman
    scaling
    example
    configuration


API Reference
-------------

If you are looking for information on a specific function, class or
method, this part of the documentation is for you.

.. toctree::
    :maxdepth: 2

    api

Additional Notes
----------------

.. toctree::
    :maxdepth: 2

    contributing


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@ -0,0 +1,24 @@
|
|||
.. _installation:

Installation
============

Install Flask-RESTX with ``pip``:

.. code-block:: console

    pip install flask-restx

The development version can be downloaded from
`GitHub <https://github.com/python-restx/flask-restx>`_.

.. code-block:: console

    git clone https://github.com/python-restx/flask-restx.git
    cd flask-restx
    pip install -e .[dev,test]

Flask-RESTX requires Python version 3.8+.
It also works with PyPy and PyPy3.
Logging
=======

Flask-RESTX extends `Flask's logging <https://flask.palletsprojects.com/en/1.1.x/logging/>`_
by providing each ``Api`` and ``Namespace`` its own standard Python :class:`logging.Logger` instance.
This allows separation of logging on a per-namespace basis, enabling more fine-grained detail and configuration.

By default, these loggers inherit configuration from the Flask application object logger.

.. code-block:: python

    import logging

    import flask

    from flask_restx import Api, Resource

    # configure root logger
    logging.basicConfig(level=logging.INFO)

    app = flask.Flask(__name__)

    api = Api(app)


    # each of these loggers uses configuration from app.logger
    ns1 = api.namespace('api/v1', description='test')
    ns2 = api.namespace('api/v2', description='test')


    @ns1.route('/my-resource')
    class MyResource(Resource):
        def get(self):
            # will log
            ns1.logger.info("hello from ns1")
            return {"message": "hello"}


    @ns2.route('/my-resource')
    class MyNewResource(Resource):
        def get(self):
            # won't log due to INFO log level from app.logger
            ns2.logger.debug("hello from ns2")
            return {"message": "hello"}


Loggers can be configured individually to override the configuration from the Flask
application object logger. In the above example, the ``ns2`` log level can be set to
``DEBUG`` individually:

.. code-block:: python

    # ns1 will have log level INFO from app.logger
    ns1 = api.namespace('api/v1', description='test')

    # ns2 will have log level DEBUG
    ns2 = api.namespace('api/v2', description='test')
    ns2.logger.setLevel(logging.DEBUG)


    @ns1.route('/my-resource')
    class MyResource(Resource):
        def get(self):
            # will log
            ns1.logger.info("hello from ns1")
            return {"message": "hello"}


    @ns2.route('/my-resource')
    class MyNewResource(Resource):
        def get(self):
            # will log
            ns2.logger.debug("hello from ns2")
            return {"message": "hello"}


Adding additional handlers:

.. code-block:: python

    # configure a file handler for ns1 only
    ns1 = api.namespace('api/v1')
    fh = logging.FileHandler("v1.log")
    ns1.logger.addHandler(fh)

    ns2 = api.namespace('api/v2')


    @ns1.route('/my-resource')
    class MyResource(Resource):
        def get(self):
            # will log to *both* v1.log file and app.logger handlers
            ns1.logger.info("hello from ns1")
            return {"message": "hello"}


    @ns2.route('/my-resource')
    class MyNewResource(Resource):
        def get(self):
            # will log to *only* app.logger handlers
            ns2.logger.info("hello from ns2")
            return {"message": "hello"}
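The handler behaviour described above is plain Python :mod:`logging` propagation and can be sketched with the standard library alone, no Flask required; the logger names and the ``ListHandler`` helper here are illustrative, not part of flask-restx:

```python
import logging

class ListHandler(logging.Handler):
    """Collects log messages in a list so we can inspect them."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

app_handler = ListHandler()
app_logger = logging.getLogger("app")      # stands in for app.logger
app_logger.setLevel(logging.INFO)
app_logger.addHandler(app_handler)

ns1_handler = ListHandler()
ns1_logger = logging.getLogger("app.v1")   # child logger: inherits level INFO
ns1_logger.addHandler(ns1_handler)         # extra handler, like the FileHandler above

ns2_logger = logging.getLogger("app.v2")   # child logger: no handler of its own

ns1_logger.info("hello from ns1")   # reaches ns1_handler *and* app_handler
ns2_logger.info("hello from ns2")   # reaches app_handler only
ns2_logger.debug("too detailed")    # dropped: effective level is INFO
```

Because records propagate from child loggers to their parent's handlers, ``app.v1`` messages land in both lists, which mirrors ``ns1`` logging to both ``v1.log`` and the app logger handlers.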
@ECHO OFF

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
    set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
    set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
    set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
    :help
    echo.Please use `make ^<target^>` where ^<target^> is one of
    echo.  html       to make standalone HTML files
    echo.  dirhtml    to make HTML files named index.html in directories
    echo.  singlehtml to make a single large HTML file
    echo.  pickle     to make pickle files
    echo.  json       to make JSON files
    echo.  htmlhelp   to make HTML files and a HTML help project
    echo.  qthelp     to make HTML files and a qthelp project
    echo.  devhelp    to make HTML files and a Devhelp project
    echo.  epub       to make an epub
    echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
    echo.  text       to make text files
    echo.  man        to make manual pages
    echo.  texinfo    to make Texinfo files
    echo.  gettext    to make PO message catalogs
    echo.  changes    to make an overview over all changed/added/deprecated items
    echo.  xml        to make Docutils-native XML files
    echo.  pseudoxml  to make pseudoxml-XML files for display purposes
    echo.  linkcheck  to check all external links for integrity
    echo.  doctest    to run all doctests embedded in the documentation if enabled
    goto end
)

if "%1" == "clean" (
    for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
    del /q /s %BUILDDIR%\*
    goto end
)


%SPHINXBUILD% 2> nul
if errorlevel 9009 (
    echo.
    echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
    echo.installed, then set the SPHINXBUILD environment variable to point
    echo.to the full path of the 'sphinx-build' executable. Alternatively you
    echo.may add the Sphinx directory to PATH.
    echo.
    echo.If you don't have Sphinx installed, grab it from
    echo.https://sphinx-doc.org/
    exit /b 1
)

if "%1" == "html" (
    %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The HTML pages are in %BUILDDIR%/html.
    goto end
)

if "%1" == "dirhtml" (
    %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
    goto end
)

if "%1" == "singlehtml" (
    %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
    goto end
)

if "%1" == "pickle" (
    %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished; now you can process the pickle files.
    goto end
)

if "%1" == "json" (
    %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished; now you can process the JSON files.
    goto end
)

if "%1" == "htmlhelp" (
    %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
    goto end
)

if "%1" == "qthelp" (
    %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
    echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Flask-RESTX.qhcp
    echo.To view the help file:
    echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Flask-RESTX.ghc
    goto end
)

if "%1" == "devhelp" (
    %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished.
    goto end
)

if "%1" == "epub" (
    %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The epub file is in %BUILDDIR%/epub.
    goto end
)

if "%1" == "latex" (
    %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
    goto end
)

if "%1" == "latexpdf" (
    %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
    cd %BUILDDIR%/latex
    make all-pdf
    cd %BUILDDIR%/..
    echo.
    echo.Build finished; the PDF files are in %BUILDDIR%/latex.
    goto end
)

if "%1" == "latexpdfja" (
    %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
    cd %BUILDDIR%/latex
    make all-pdf-ja
    cd %BUILDDIR%/..
    echo.
    echo.Build finished; the PDF files are in %BUILDDIR%/latex.
    goto end
)

if "%1" == "text" (
    %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The text files are in %BUILDDIR%/text.
    goto end
)

if "%1" == "man" (
    %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The manual pages are in %BUILDDIR%/man.
    goto end
)

if "%1" == "texinfo" (
    %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
    goto end
)

if "%1" == "gettext" (
    %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
    goto end
)

if "%1" == "changes" (
    %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
    if errorlevel 1 exit /b 1
    echo.
    echo.The overview file is in %BUILDDIR%/changes.
    goto end
)

if "%1" == "linkcheck" (
    %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
    if errorlevel 1 exit /b 1
    echo.
    echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
    goto end
)

if "%1" == "doctest" (
    %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
    if errorlevel 1 exit /b 1
    echo.
    echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
    goto end
)

if "%1" == "xml" (
    %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The XML files are in %BUILDDIR%/xml.
    goto end
)

if "%1" == "pseudoxml" (
    %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
    if errorlevel 1 exit /b 1
    echo.
    echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
    goto end
)

:end
.. _fields:

Response marshalling
====================

.. currentmodule:: flask_restx


Flask-RESTX provides an easy way to control what data you actually render in
your response or expect as an input payload.
With the :mod:`~.fields` module, you can use whatever objects (ORM
models/custom classes/etc.) you want in your resource.
:mod:`~.fields` also lets you format and filter the response
so you don't have to worry about exposing internal data structures.

It's also very clear when looking at your code what data will be rendered and
how it will be formatted.


Basic Usage
-----------
You can define a dict or OrderedDict of fields whose keys are names of attributes or keys on the object to render,
and whose values are a class that will format & return the value for that field.
This example has three fields:
two are :class:`~fields.String` and one is a :class:`~fields.DateTime`,
formatted as an ISO 8601 datetime string (RFC 822 is supported as well):

.. code-block:: python

    from flask_restx import Resource, fields

    model = api.model('Model', {
        'name': fields.String,
        'address': fields.String,
        'date_updated': fields.DateTime(dt_format='rfc822'),
    })

    @api.route('/todo')
    class Todo(Resource):
        @api.marshal_with(model, envelope='resource')
        def get(self, **kwargs):
            return db_get_todo()  # Some function that queries the db


This example assumes that you have a custom database object (``todo``) that
has attributes ``name``, ``address``, and ``date_updated``.
Any additional attributes on the object are considered private and won't be rendered in the output.
An optional ``envelope`` keyword argument is specified to wrap the resulting output.

The decorator :meth:`~Api.marshal_with` is what actually takes your data object and applies the field filtering.
The marshalling can work on single objects, dicts, or lists of objects.

.. note ::

    :func:`marshal_with` is a convenience decorator that is functionally
    equivalent to:

    .. code-block:: python

        class Todo(Resource):
            def get(self, **kwargs):
                return marshal(db_get_todo(), model), 200

    The :meth:`@api.marshal_with <Api.marshal_with>` decorator adds the Swagger documentation ability.

This explicit expression can be used to return HTTP status codes other than 200
along with a successful response (see :func:`~errors.abort` for errors).
Renaming Attributes
-------------------

Often your public-facing field name is different from your internal field name.
To configure this mapping, use the ``attribute`` keyword argument. ::

    model = {
        'name': fields.String(attribute='private_name'),
        'address': fields.String,
    }

A lambda (or any callable) can also be specified as the ``attribute`` ::

    model = {
        'name': fields.String(attribute=lambda x: x._private_name),
        'address': fields.String,
    }

Nested properties can also be accessed with ``attribute``::

    model = {
        'name': fields.String(attribute='people_list.0.person_dictionary.name'),
        'address': fields.String,
    }

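As a rough illustration of what such a dotted path means, here is a plain-Python sketch of resolving it against a mix of lists, dicts and objects; ``resolve_attribute`` is a made-up helper for this example, not flask-restx's actual resolver:

```python
def resolve_attribute(obj, path):
    """Walk a dotted path: numeric parts index sequences, other parts
    look up dict keys or plain attributes; dead ends yield None."""
    for part in path.split('.'):
        if obj is None:
            return None
        if isinstance(obj, (list, tuple)):
            obj = obj[int(part)]
        elif isinstance(obj, dict):
            obj = obj.get(part)
        else:
            obj = getattr(obj, part, None)
    return obj

data = {'people_list': [{'person_dictionary': {'name': 'Ada'}}]}
print(resolve_attribute(data, 'people_list.0.person_dictionary.name'))  # Ada
```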
Default Values
--------------

If for some reason your data object doesn't have an attribute in your fields list,
you can specify a default value to return instead of :obj:`None`.

.. code-block:: python

    model = {
        'name': fields.String(default='Anonymous User'),
        'address': fields.String,
    }


Custom Fields & Multiple Values
-------------------------------

Sometimes you have your own custom formatting needs.
You can subclass the :class:`fields.Raw` class and implement the format function.
This is especially useful when an attribute stores multiple pieces of information,
e.g. a bit-field whose individual bits represent distinct values.
You can use fields to multiplex a single attribute to multiple output values.

This example assumes that bit 1 in the ``flags`` attribute signifies a
"Normal" or "Urgent" item, and bit 2 signifies "Read" or "Unread".
These items might be easy to store in a bitfield,
but for a human-readable output it's nice to convert them to separate string fields.

.. code-block:: python

    class UrgentItem(fields.Raw):
        def format(self, value):
            return "Urgent" if value & 0x01 else "Normal"

    class UnreadItem(fields.Raw):
        def format(self, value):
            return "Unread" if value & 0x02 else "Read"

    model = {
        'name': fields.String,
        'priority': UrgentItem(attribute='flags'),
        'status': UnreadItem(attribute='flags'),
    }

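The bit tests above can be checked in isolation with plain functions using the same flag layout as the example (no flask-restx needed):

```python
def priority(flags):
    # bit 1 distinguishes "Urgent" from "Normal"
    return "Urgent" if flags & 0x01 else "Normal"

def status(flags):
    # bit 2 distinguishes "Unread" from "Read"
    return "Unread" if flags & 0x02 else "Read"

item = {'name': 'pay rent', 'flags': 0b11}  # urgent and unread
print(priority(item['flags']), status(item['flags']))  # Urgent Unread
print(priority(0b00), status(0b00))                    # Normal Read
```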
Url & Other Concrete Fields
---------------------------

Flask-RESTX includes a special field, :class:`fields.Url`,
that synthesizes a URI for the resource that's being requested.
This is also a good example of how to add data to your response that's not actually present on your data object.

.. code-block:: python

    class RandomNumber(fields.Raw):
        def output(self, key, obj):
            return random.random()

    model = {
        'name': fields.String,
        # todo_resource is the endpoint name when you called api.route()
        'uri': fields.Url('todo_resource'),
        'random': RandomNumber,
    }


By default :class:`fields.Url` returns a relative URI.
To generate an absolute URI that includes the scheme, hostname and port,
pass the keyword argument ``absolute=True`` in the field declaration.
To override the default scheme, pass the ``scheme`` keyword argument:

.. code-block:: python

    model = {
        'uri': fields.Url('todo_resource', absolute=True),
        'https_uri': fields.Url('todo_resource', absolute=True, scheme='https')
    }

Complex Structures
------------------

You can have a flat structure that :func:`marshal` will transform to a nested structure:

.. code-block:: python

    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> resource_fields = {'name': fields.String}
    >>> resource_fields['address'] = {}
    >>> resource_fields['address']['line 1'] = fields.String(attribute='addr1')
    >>> resource_fields['address']['line 2'] = fields.String(attribute='addr2')
    >>> resource_fields['address']['city'] = fields.String
    >>> resource_fields['address']['state'] = fields.String
    >>> resource_fields['address']['zip'] = fields.String
    >>> data = {'name': 'bob', 'addr1': '123 fake street', 'addr2': '', 'city': 'New York', 'state': 'NY', 'zip': '10468'}
    >>> json.dumps(marshal(data, resource_fields))
    '{"name": "bob", "address": {"line 1": "123 fake street", "line 2": "", "state": "NY", "zip": "10468", "city": "New York"}}'

.. note ::
    The address field doesn't actually exist on the data object,
    but any of the sub-fields can access attributes directly from the object
    as if they were not nested.

.. _list-field:

List Field
----------

You can also unmarshal fields as lists ::

    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> resource_fields = {'name': fields.String, 'first_names': fields.List(fields.String)}
    >>> data = {'name': 'Bougnazal', 'first_names': ['Emile', 'Raoul']}
    >>> json.dumps(marshal(data, resource_fields))
    '{"first_names": ["Emile", "Raoul"], "name": "Bougnazal"}'

.. _wildcard-field:

Wildcard Field
--------------

If you don't know the name(s) of the field(s) you want to unmarshal, you can
use :class:`~fields.Wildcard` ::

    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> wild = fields.Wildcard(fields.String)
    >>> wildcard_fields = {'*': wild}
    >>> data = {'John': 12, 'bob': 42, 'Jane': '68'}
    >>> json.dumps(marshal(data, wildcard_fields))
    '{"Jane": "68", "bob": "42", "John": "12"}'

The name you give to your :class:`~fields.Wildcard` acts as a real glob as
shown below ::

    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> wild = fields.Wildcard(fields.String)
    >>> wildcard_fields = {'j*': wild}
    >>> data = {'John': 12, 'bob': 42, 'Jane': '68'}
    >>> json.dumps(marshal(data, wildcard_fields))
    '{"Jane": "68", "John": "12"}'

.. note ::
    It is important you define your :class:`~fields.Wildcard` **outside** your
    model (i.e. you **cannot** use it like this:
    ``res_fields = {'*': fields.Wildcard(fields.String)}``) because it has to be
    stateful to keep track of what fields it has already treated.

.. note ::
    The glob is not a regex; it can only treat simple wildcards like '*' or '?'.

In order to avoid unexpected behavior, when mixing :class:`~fields.Wildcard`
with other fields, you may want to use an ``OrderedDict`` and use the
:class:`~fields.Wildcard` as the last field ::

    >>> from collections import OrderedDict
    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> wild = fields.Wildcard(fields.Integer)
    >>> mod = OrderedDict()
    >>> mod['zoro'] = fields.String
    >>> mod['*'] = wild
    >>> # you can use it in api.model like this:
    >>> # some_fields = api.model('MyModel', {'zoro': fields.String, '*': wild})
    >>>
    >>> data = {'John': 12, 'bob': 42, 'Jane': '68', 'zoro': 72}
    >>> json.dumps(marshal(data, mod))
    '{"zoro": "72", "Jane": 68, "bob": 42, "John": 12}'

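Which keys a simple glob selects can be previewed with the standard library's :mod:`fnmatch`; this is only a sketch of the matching idea, not flask-restx's internal implementation (``fnmatchcase`` is case-sensitive, so the pattern here uses a capital ``J``):

```python
from fnmatch import fnmatchcase

data = {'John': 12, 'bob': 42, 'Jane': '68'}

# keep only the keys matched by the simple glob 'J*'
matched = {key: value for key, value in data.items() if fnmatchcase(key, 'J*')}
print(matched)  # {'John': 12, 'Jane': '68'}
```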
.. _nested-field:

Nested Field
------------

While nesting fields using dicts can turn a flat data object into a nested
response, you can use :class:`~fields.Nested` to unmarshal nested data
structures and render them appropriately. ::

    >>> from flask_restx import fields, marshal
    >>> import json
    >>>
    >>> address_fields = {}
    >>> address_fields['line 1'] = fields.String(attribute='addr1')
    >>> address_fields['line 2'] = fields.String(attribute='addr2')
    >>> address_fields['city'] = fields.String(attribute='city')
    >>> address_fields['state'] = fields.String(attribute='state')
    >>> address_fields['zip'] = fields.String(attribute='zip')
    >>>
    >>> resource_fields = {}
    >>> resource_fields['name'] = fields.String
    >>> resource_fields['billing_address'] = fields.Nested(address_fields)
    >>> resource_fields['shipping_address'] = fields.Nested(address_fields)
    >>> address1 = {'addr1': '123 fake street', 'city': 'New York', 'state': 'NY', 'zip': '10468'}
    >>> address2 = {'addr1': '555 nowhere', 'city': 'New York', 'state': 'NY', 'zip': '10468'}
    >>> data = {'name': 'bob', 'billing_address': address1, 'shipping_address': address2}
    >>>
    >>> json.dumps(marshal(data, resource_fields))
    '{"billing_address": {"line 1": "123 fake street", "line 2": null, "state": "NY", "zip": "10468", "city": "New York"}, "name": "bob", "shipping_address": {"line 1": "555 nowhere", "line 2": null, "state": "NY", "zip": "10468", "city": "New York"}}'

This example uses two :class:`~fields.Nested` fields.
The :class:`~fields.Nested` constructor takes a dict of fields to render as sub-fields.
The important difference between the :class:`~fields.Nested` constructor and nested dicts (previous example)
is the context for attributes.
In this example,
``billing_address`` is a complex object that has its own fields, and
the context passed to the nested field is the sub-object instead of the original ``data`` object.
In other words:
``data.billing_address.addr1`` is in scope here,
whereas in the previous example ``data.addr1`` was the location attribute.
Remember: :class:`~fields.Nested` and :class:`~fields.List` objects create a new scope for attributes.

By default, when the sub-object is ``None``, an object with default values for the nested fields will be generated instead of ``null``. This can be modified by passing the ``allow_null`` parameter; see the :class:`~fields.Nested` constructor for more details.
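A minimal plain-Python sketch of that distinction follows; ``render_nested`` and the defaults dict are made up for illustration, they are not flask-restx internals:

```python
ADDRESS_DEFAULTS = {'line 1': None, 'line 2': None, 'city': None}

def render_nested(sub_object, allow_null=False):
    # allow_null=False: a missing sub-object becomes a dict of field defaults
    # allow_null=True:  a missing sub-object stays None (JSON null)
    if sub_object is None:
        return None if allow_null else dict(ADDRESS_DEFAULTS)
    return {key: sub_object.get(key) for key in ADDRESS_DEFAULTS}

print(render_nested(None))                   # {'line 1': None, 'line 2': None, 'city': None}
print(render_nested(None, allow_null=True))  # None
```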

Use :class:`~fields.Nested` with :class:`~fields.List` to marshal lists of more complex objects:

.. code-block:: python

    user_fields = api.model('User', {
        'id': fields.Integer,
        'name': fields.String,
    })

    user_list_fields = api.model('UserList', {
        'users': fields.List(fields.Nested(user_fields)),
    })

The ``api.model()`` factory
---------------------------

The :meth:`~Namespace.model` factory allows you to instantiate
and register models to your :class:`Api` or :class:`Namespace`.

.. code-block:: python

    my_fields = api.model('MyModel', {
        'name': fields.String,
        'age': fields.Integer(min=0)
    })

    # Equivalent to
    my_fields = Model('MyModel', {
        'name': fields.String,
        'age': fields.Integer(min=0)
    })
    api.models[my_fields.name] = my_fields


Duplicating with ``clone``
~~~~~~~~~~~~~~~~~~~~~~~~~~

The :meth:`Model.clone` method allows you to instantiate an augmented model.
It saves you duplicating all fields.

.. code-block:: python

    parent = Model('Parent', {
        'name': fields.String
    })

    child = parent.clone('Child', {
        'age': fields.Integer
    })

The :meth:`Api/Namespace.clone <Namespace.clone>` also registers it on the API.

.. code-block:: python

    parent = api.model('Parent', {
        'name': fields.String
    })

    child = api.clone('Child', parent, {
        'age': fields.Integer
    })

Polymorphism with ``api.inherit``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :meth:`Model.inherit` method allows you to extend a model in the "Swagger way"
and to start handling polymorphism.

.. code-block:: python

    parent = api.model('Parent', {
        'name': fields.String,
        'class': fields.String(discriminator=True)
    })

    child = api.inherit('Child', parent, {
        'extra': fields.String
    })

The :meth:`Api/Namespace.inherit <Namespace.inherit>` will register both the parent and the child
in the Swagger models definitions.

.. code-block:: python

    parent = Model('Parent', {
        'name': fields.String,
        'class': fields.String(discriminator=True)
    })

    child = parent.inherit('Child', {
        'extra': fields.String
    })


The ``class`` field in this example will be populated with the serialized model name
only if the property does not exist in the serialized object.

The :class:`~fields.Polymorph` field allows you to specify a mapping between Python classes
and field specifications.

.. code-block:: python

    mapping = {
        Child1: child1_fields,
        Child2: child2_fields,
    }

    fields = api.model('Thing', {
        'owner': fields.Polymorph(mapping)
    })

Custom fields
-------------

Custom output fields let you perform your own output formatting without having
to modify your internal objects directly.
All you have to do is subclass :class:`~fields.Raw` and implement the :meth:`~fields.Raw.format` method:

.. code-block:: python

    class AllCapsString(fields.Raw):
        def format(self, value):
            return value.upper()


    # example usage
    fields = {
        'name': fields.String,
        'all_caps_name': AllCapsString(attribute='name'),
    }

You can also use the ``__schema_format__``, ``__schema_type__`` and
``__schema_example__`` attributes to specify the produced types and examples:

.. code-block:: python

    class MyIntField(fields.Integer):
        __schema_format__ = 'int64'

    class MySpecialField(fields.Raw):
        __schema_type__ = 'some-type'
        __schema_format__ = 'some-format'

    class MyVerySpecialField(fields.Raw):
        __schema_example__ = 'hello, world'

Skip fields which value is None
|
||||
-------------------------------
|
||||
|
||||
You can skip those fields which values is ``None`` instead of marshaling those fields with JSON value, null.
|
||||
This feature is useful to reduce the size of response when you have a lots of fields which value may be None,
|
||||
but which fields are ``None`` are unpredictable.
|
||||
|
||||
Let consider the following example with an optional ``skip_none`` keyword argument be set to True.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
>>> from flask_restx import Model, fields, marshal_with
|
||||
>>> import json
|
||||
>>> model = Model('Model', {
|
||||
... 'name': fields.String,
|
||||
... 'address_1': fields.String,
|
||||
... 'address_2': fields.String
|
||||
... })
|
||||
>>> @marshal_with(model, skip_none=True)
|
||||
... def get():
|
||||
... return {'name': 'John', 'address_1': None}
|
||||
...
|
||||
>>> get()
|
||||
OrderedDict([('name', 'John')])
|
||||
|
||||
You can see that ``address_1`` and ``address_2`` are skipped by :func:`marshal_with`.
|
||||
``address_1`` be skipped because value is ``None``.
|
||||
``address_2`` be skipped because the dictionary return by ``get()`` have no key, ``address_2``.

Skip none in Nested fields
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your model uses :class:`fields.Nested`, you need to pass the ``skip_none=True`` keyword argument to :class:`fields.Nested`.

.. code-block:: python

    >>> from flask_restx import Model, fields, marshal_with
    >>> import json
    >>> model = Model('Model', {
    ...     'name': fields.String,
    ...     'location': fields.Nested(location_model, skip_none=True)
    ... })


Define model using JSON Schema
------------------------------

You can define models using `JSON Schema <http://json-schema.org/examples.html>`_ (Draft v4).

.. code-block:: python

    address = api.schema_model('Address', {
        'properties': {
            'road': {
                'type': 'string'
            },
        },
        'type': 'object'
    })

    person = api.schema_model('Person', {
        'required': ['address'],
        'properties': {
            'name': {
                'type': 'string'
            },
            'age': {
                'type': 'integer'
            },
            'birthdate': {
                'type': 'string',
                'format': 'date-time'
            },
            'address': {
                '$ref': '#/definitions/Address',
            }
        },
        'type': 'object'
    })

Fields masks
============

Flask-RESTX supports partial object fetching (a.k.a. field masks)
by supplying a custom header in the request.

By default the header is ``X-Fields``,
but it can be changed with the ``RESTX_MASK_HEADER`` parameter.

Syntax
------

The syntax is quite simple:
you just provide a comma-separated list of field names,
optionally wrapped in brackets.

.. code-block:: python

    # These two masks are equivalent
    mask = '{name,age}'
    # or
    mask = 'name,age'
    data = requests.get('/some/url/', headers={'X-Fields': mask})
    assert len(data) == 2
    assert 'name' in data
    assert 'age' in data

To specify a nested field mask,
simply provide it in brackets following the field name:

.. code-block:: python

    mask = '{name, age, pet{name}}'

Nesting specification works with nested objects or lists of objects:

.. code-block:: python

    # Will apply the mask {name} to each pet
    # in the pets list.
    mask = '{name, age, pets{name}}'

There is a special star token meaning "all remaining fields".
It allows you to specify only the nested filtering:

.. code-block:: python

    # Will apply the mask {name} to each pet
    # in the pets list and take all other root fields
    # without filtering.
    mask = '{pets{name},*}'

    # Will not filter anything
    mask = '*'
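To make the semantics concrete, here is a stdlib-only sketch of how a *flat* mask selects keys from an already-serialized dictionary. This is an illustration only; Flask-RESTX's real mask parser additionally handles nesting:

```python
def apply_flat_mask(data, mask):
    # Strip optional surrounding brackets, then split on commas
    names = [name.strip() for name in mask.strip('{}').split(',')]
    if '*' in names:
        # The star token means "all remaining fields"
        return dict(data)
    return {key: value for key, value in data.items() if key in names}

print(apply_flat_mask({'name': 'Felix', 'age': 4, 'color': 'black'}, '{name,age}'))
# -> {'name': 'Felix', 'age': 4}
```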


Usage
-----

By default, each time you use ``api.marshal`` or ``@api.marshal_with``,
the mask will be automatically applied if the header is present.

The header will be exposed as a Swagger parameter each time you use the
``@api.marshal_with`` decorator.

As Swagger does not permit exposing a global header once,
it can make your Swagger specifications a lot more verbose.
You can disable this behavior by setting ``RESTX_MASK_SWAGGER`` to ``False``.

You can also specify a default mask that will be applied if no header mask is found.

.. code-block:: python

    class MyResource(Resource):
        @api.marshal_with(my_model, mask='name,age')
        def get(self):
            pass

A default mask can also be handled at the model level:

.. code-block:: python

    model = api.model('Person', {
        'name': fields.String,
        'age': fields.Integer,
        'boolean': fields.Boolean,
    }, mask='{name,age}')

It will be exposed in the model's ``x-mask`` vendor field:

.. code-block:: JSON

    {"definitions": {
        "Test": {
            "properties": {
                "age": {"type": "integer"},
                "boolean": {"type": "boolean"},
                "name": {"type": "string"}
            },
            "x-mask": "{name,age}"
        }
    }}

To override a default mask, you need to give another mask or pass ``*`` as the mask.


.. _parsing:

Request Parsing
===============

.. warning ::

    The whole request parser part of Flask-RESTX is slated for removal and
    will be replaced by documentation on how to integrate with other packages
    that do the input/output stuff better
    (such as `marshmallow <https://marshmallow.readthedocs.io/>`_).
    This means that it will be maintained until 2.0, but consider it deprecated.
    Don't worry: if you have code using it now and wish to continue doing so,
    it's not going to go away any time soon.

.. currentmodule:: flask_restx

Flask-RESTX's request parsing interface, :mod:`reqparse`,
is modeled after the :mod:`python:argparse` interface.
It's designed to provide simple and uniform access to any variable on the
:class:`flask.request` object in Flask.

Basic Arguments
---------------

Here's a simple example of the request parser.
It looks for two arguments in the :attr:`flask.Request.values` dict: an integer and a string.

.. code-block:: python

    from flask_restx import reqparse

    parser = reqparse.RequestParser()
    parser.add_argument('rate', type=int, help='Rate cannot be converted')
    parser.add_argument('name')
    args = parser.parse_args()

.. note ::

    The default argument type is a unicode string: ``str`` in Python 3.

If you specify the ``help`` value,
it will be rendered as the error message when a type error is raised while parsing it.
If you do not specify a help message,
the default behavior is to return the message from the type error itself.
See :ref:`error-messages` for more details.


.. note ::

    By default, arguments are **not** required.
    Also, arguments supplied in the request that are not part of the :class:`~reqparse.RequestParser` will be ignored.

.. note ::

    Arguments declared in your request parser but not set in the request itself will default to ``None``.

Required Arguments
------------------

To require a value be passed for an argument,
just add ``required=True`` to the call to :meth:`~reqparse.RequestParser.add_argument`.

.. code-block:: python

    parser.add_argument('name', required=True, help="Name cannot be blank!")


Multiple Values & Lists
-----------------------

If you want to accept multiple values for a key as a list, you can pass ``action='append'``:

.. code-block:: python

    parser.add_argument('name', action='append')

This will let you make queries like ::

    curl http://api.example.com -d "name=bob" -d "name=sue" -d "name=joe"

And your args will look like this:

.. code-block:: python

    args = parser.parse_args()
    args['name']    # ['bob', 'sue', 'joe']

If you expect a comma-separated list, use ``action='split'``:

.. code-block:: python

    parser.add_argument('fruits', action='split')

This will let you make queries like ::

    curl http://api.example.com -d "fruits=apple,lemon,cherry"

And your args will look like this:

.. code-block:: python

    args = parser.parse_args()
    args['fruits']    # ['apple', 'lemon', 'cherry']


Other Destinations
------------------

If for some reason you'd like your argument stored under a different name once
it's parsed, you can use the ``dest`` keyword argument. ::

    parser.add_argument('name', dest='public_name')

    args = parser.parse_args()
    args['public_name']

Argument Locations
------------------

By default, the :class:`~reqparse.RequestParser` tries to parse values from
:attr:`flask.Request.values` and :attr:`flask.Request.json`.

Use the ``location`` argument to :meth:`~reqparse.RequestParser.add_argument`
to specify alternate locations to pull the values from. Any variable on the
:class:`flask.Request` can be used. For example: ::

    # Look only in the POST body
    parser.add_argument('name', type=int, location='form')

    # Look only in the querystring
    parser.add_argument('PageSize', type=int, location='args')

    # From the request headers
    parser.add_argument('User-Agent', location='headers')

    # From http cookies
    parser.add_argument('session_id', location='cookies')

    # From file uploads
    parser.add_argument('picture', type=werkzeug.datastructures.FileStorage, location='files')

.. note ::

    Only use ``type=list`` when ``location='json'``. `See this issue for more
    details <https://github.com/flask-restful/flask-restful/issues/380>`_.

.. note ::

    Using ``location='form'`` is a way to both validate form data and document your form fields.


Multiple Locations
------------------

Multiple argument locations can be specified by passing a list to ``location``::

    parser.add_argument('text', location=['headers', 'values'])

When multiple locations are specified, the arguments from all locations
specified are combined into a single :class:`~werkzeug.datastructures.MultiDict`.
The last ``location`` listed takes precedence in the result set.

If the argument location list includes the :attr:`~flask.Request.headers`
location, the argument names will no longer be case insensitive and must match
their title case names (see :meth:`str.title`). Specifying
``location='headers'`` (not as a list) will retain case insensitivity.
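The precedence rule ("the last location listed wins") can be sketched with plain dictionaries. This is illustrative only; the real parser combines werkzeug ``MultiDict`` objects, not plain dicts:

```python
def combine_locations(ordered_location_values):
    # Merge per-location dicts in order; later locations overwrite earlier ones
    combined = {}
    for values in ordered_location_values:
        combined.update(values)
    return combined

headers = {'text': 'from-headers'}
values = {'text': 'from-values'}
# location=['headers', 'values'] -> 'values' is listed last and wins
print(combine_locations([headers, values]))
# -> {'text': 'from-values'}
```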


Advanced types handling
-----------------------

Sometimes, you need more than a primitive type to handle input validation.
The :mod:`~flask_restx.inputs` module provides some common type handling like:

- :func:`~inputs.boolean` for wider boolean handling
- :func:`~inputs.ipv4` and :func:`~inputs.ipv6` for IP addresses
- :func:`~inputs.date_from_iso8601` and :func:`~inputs.datetime_from_iso8601` for ISO8601 date and datetime handling

You just have to use them as the ``type`` argument:

.. code-block:: python

    parser.add_argument('flag', type=inputs.boolean)

See the :mod:`~flask_restx.inputs` documentation for the full list of available inputs.

You can also write your own:

.. code-block:: python

    def my_type(value):
        '''Parse my type'''
        if not condition:
            raise ValueError('This is not my type')
        return parse(value)

    # Swagger documentation
    my_type.__schema__ = {'type': 'string', 'format': 'my-custom-format'}
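A concrete, runnable version of this pattern follows. The ``positive_int`` type is a hypothetical example of ours; only the contract (a callable that raises ``ValueError`` on bad input and returns the parsed value) comes from the documentation above:

```python
def positive_int(value):
    '''Parse a strictly positive integer.'''
    parsed = int(value)  # raises ValueError for non-numeric input
    if parsed <= 0:
        raise ValueError('{0} is not a positive integer'.format(value))
    return parsed

# Swagger documentation for the custom type
positive_int.__schema__ = {'type': 'integer', 'format': 'positive-integer'}

print(positive_int('3'))
# -> 3
```

It would then be registered like any built-in type, e.g. ``parser.add_argument('page_size', type=positive_int)``.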


Parser Inheritance
------------------

Often you will make a different parser for each resource you write. The problem
with this is if parsers have arguments in common. Instead of rewriting
arguments you can write a parent parser containing all the shared arguments and
then extend the parser with :meth:`~reqparse.RequestParser.copy`. You can
also overwrite any argument in the parent with
:meth:`~reqparse.RequestParser.replace_argument`, or remove it completely
with :meth:`~reqparse.RequestParser.remove_argument`. For example: ::

    from flask_restx import reqparse

    parser = reqparse.RequestParser()
    parser.add_argument('foo', type=int)

    parser_copy = parser.copy()
    parser_copy.add_argument('bar', type=int)

    # parser_copy has both 'foo' and 'bar'

    parser_copy.replace_argument('foo', required=True, location='json')
    # 'foo' is now a required str located in json, not an int as defined
    # by original parser

    parser_copy.remove_argument('foo')
    # parser_copy no longer has 'foo' argument

File Upload
-----------

To handle file upload with the :class:`~reqparse.RequestParser`,
you need to use the ``files`` location
and set the type to :class:`~werkzeug.datastructures.FileStorage`.

.. code-block:: python

    from werkzeug.datastructures import FileStorage

    upload_parser = api.parser()
    upload_parser.add_argument('file', location='files',
                               type=FileStorage, required=True)


    @api.route('/upload/')
    @api.expect(upload_parser)
    class Upload(Resource):
        def post(self):
            args = upload_parser.parse_args()
            uploaded_file = args['file']  # This is a FileStorage instance
            url = do_something_with_file(uploaded_file)
            return {'url': url}, 201

See the `dedicated Flask documentation section <https://flask.palletsprojects.com/en/1.1.x/patterns/fileuploads/>`_.


Error Handling
--------------

The default way errors are handled by the RequestParser is to abort on the
first error that occurred. This can be beneficial when you have arguments that
might take some time to process. However, often it is nice to have the errors
bundled together and sent back to the client all at once. This behavior can be
specified either at the Flask application level or on the specific
RequestParser instance. To invoke a RequestParser with the error bundling
option, pass in the argument ``bundle_errors``. For example ::

    from flask_restx import reqparse

    parser = reqparse.RequestParser(bundle_errors=True)
    parser.add_argument('foo', type=int, required=True)
    parser.add_argument('bar', type=int, required=True)

    # If a request comes in not containing both 'foo' and 'bar', the error that
    # will come back will look something like this.

    {
        "message": {
            "foo": "foo error message",
            "bar": "bar error message"
        }
    }

    # The default behavior would only return the first error

    parser = RequestParser()
    parser.add_argument('foo', type=int, required=True)
    parser.add_argument('bar', type=int, required=True)

    {
        "message": {
            "foo": "foo error message"
        }
    }

The application configuration key is ``BUNDLE_ERRORS``. For example ::

    from flask import Flask

    app = Flask(__name__)
    app.config['BUNDLE_ERRORS'] = True

.. warning ::

    ``BUNDLE_ERRORS`` is a global setting that overrides the ``bundle_errors``
    option in individual :class:`~reqparse.RequestParser` instances.
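The difference between the two behaviours can be sketched in plain Python. This is illustrative only; ``validate_args`` is a hypothetical helper of ours, not Flask-RESTX API:

```python
def validate_args(raw_args, validators, bundle_errors=False):
    # Run each validator; either stop at the first error (the default)
    # or collect all errors into one dictionary (bundle_errors=True).
    errors = {}
    for name, validator in validators.items():
        try:
            validator(raw_args.get(name))
        except (TypeError, ValueError) as exc:
            errors[name] = str(exc)
            if not bundle_errors:
                break  # abort on the first error
    return errors

validators = {'foo': int, 'bar': int}
print(len(validate_args({}, validators)))                      # first error only
print(len(validate_args({}, validators, bundle_errors=True)))  # both errors
```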


.. _error-messages:

Error Messages
--------------

Error messages for each field may be customized using the ``help`` parameter
to ``Argument`` (and also ``RequestParser.add_argument``).

If no help parameter is provided, the error message for the field will be
the string representation of the type error itself. If ``help`` is provided,
then the error message will be the value of ``help``.

``help`` may include an interpolation token, ``{error_msg}``, that will be
replaced with the string representation of the type error. This allows the
message to be customized while preserving the original error::

    from flask_restx import reqparse


    parser = reqparse.RequestParser()
    parser.add_argument(
        'foo',
        choices=('one', 'two'),
        help='Bad choice: {error_msg}'
    )

    # If a request comes in with a value of "three" for `foo`:

    {
        "message": {
            "foo": "Bad choice: three is not a valid choice",
        }
    }
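The ``{error_msg}`` token behaves like ordinary :meth:`str.format` interpolation, which the following self-contained sketch makes explicit (the error text is taken from the example above):

```python
# The help template given to add_argument...
help_template = 'Bad choice: {error_msg}'
# ...and the underlying choice error produced during parsing
error = 'three is not a valid choice'

message = help_template.format(error_msg=error)
print(message)
# -> Bad choice: three is not a valid choice
```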


Postman
=======

To help with testing, you can export your API as a `Postman`_ collection.

.. code-block:: python

    from flask import json

    from myapp import api

    urlvars = False  # Build query strings in URLs
    swagger = True  # Export Swagger specifications
    data = api.as_postman(urlvars=urlvars, swagger=swagger)
    print(json.dumps(data))


.. _Postman: https://www.getpostman.com/


.. _quickstart:

Quick start
===========

.. currentmodule:: flask_restx

This guide assumes you have a working understanding of `Flask <https://flask.palletsprojects.com/>`_,
and that you have already installed both Flask and Flask-RESTX.
If not, then follow the steps in the :ref:`installation` section.

Migrate from Flask-RESTPlus
---------------------------

.. warning:: The *migration* commands provided below are for illustration
    purposes.
    You may need to adapt them to properly fit your needs.
    We also recommend you make a backup of your project prior to running them.

At this point, Flask-RESTX remains 100% compatible with Flask-RESTPlus' API.
All you need to do is update your requirements to use Flask-RESTX instead of
Flask-RESTPlus, then update all your imports.
This can be done with something like:

.. code-block:: bash

    find . -type f -name "*.py" | xargs sed -i "s/flask_restplus/flask_restx/g"

Finally, you will need to update your configuration options (described `here
<configuration.html>`_). Example:

.. code-block:: bash

    find . -type f -name "*.py" | xargs sed -i "s/RESTPLUS_/RESTX_/g"


Initialization
--------------

As with every other extension, you can initialize it with an application object:

.. code-block:: python

    from flask import Flask
    from flask_restx import Api

    app = Flask(__name__)
    api = Api(app)

or lazily with the factory pattern:

.. code-block:: python

    from flask import Flask
    from flask_restx import Api

    api = Api()

    app = Flask(__name__)
    api.init_app(app)


A Minimal API
-------------

A minimal Flask-RESTX API looks like this:

.. code-block:: python

    from flask import Flask
    from flask_restx import Resource, Api

    app = Flask(__name__)
    api = Api(app)

    @api.route('/hello')
    class HelloWorld(Resource):
        def get(self):
            return {'hello': 'world'}

    if __name__ == '__main__':
        app.run(debug=True)


Save this as api.py and run it using your Python interpreter.
Note that we've enabled `Flask debugging <https://flask.palletsprojects.com/quickstart/#debug-mode>`_
mode to provide code reloading and better error messages.

.. code-block:: console

    $ python api.py
    * Running on http://127.0.0.1:5000/
    * Restarting with reloader


.. warning::

    Debug mode should never be used in a production environment!

Now open up a new prompt to test out your API using curl:

.. code-block:: console

    $ curl http://127.0.0.1:5000/hello
    {"hello": "world"}

You can also use the automatic documentation, served on your API root by default.
In this case: http://127.0.0.1:5000/.
See :ref:`swaggerui` for a complete description of the automatic documentation.

.. note ::

    Initializing the :class:`~Api` object always registers the root endpoint ``/``,
    even if the :ref:`swaggerui` path is changed. If you wish to use the root
    endpoint ``/`` for other purposes, you must register it before initializing
    the :class:`~Api` object.


Resourceful Routing
-------------------

The main building block provided by Flask-RESTX is the resource.
Resources are built on top of :doc:`Flask pluggable views <flask:views>`,
giving you easy access to multiple HTTP methods just by defining methods on your resource.
A basic CRUD resource for a todo application (of course) looks like this:

.. code-block:: python

    from flask import Flask, request
    from flask_restx import Resource, Api

    app = Flask(__name__)
    api = Api(app)

    todos = {}

    @api.route('/<string:todo_id>')
    class TodoSimple(Resource):
        def get(self, todo_id):
            return {todo_id: todos[todo_id]}

        def put(self, todo_id):
            todos[todo_id] = request.form['data']
            return {todo_id: todos[todo_id]}

    if __name__ == '__main__':
        app.run(debug=True)

You can try it like this:

.. code-block:: console

    $ curl http://localhost:5000/todo1 -d "data=Remember the milk" -X PUT
    {"todo1": "Remember the milk"}
    $ curl http://localhost:5000/todo1
    {"todo1": "Remember the milk"}
    $ curl http://localhost:5000/todo2 -d "data=Change my brakepads" -X PUT
    {"todo2": "Change my brakepads"}
    $ curl http://localhost:5000/todo2
    {"todo2": "Change my brakepads"}

Or from Python, if you have the `Requests <https://docs.python-requests.org/>`_ library installed:

.. code-block:: python

    >>> from requests import put, get
    >>> put('http://localhost:5000/todo1', data={'data': 'Remember the milk'}).json()
    {u'todo1': u'Remember the milk'}
    >>> get('http://localhost:5000/todo1').json()
    {u'todo1': u'Remember the milk'}
    >>> put('http://localhost:5000/todo2', data={'data': 'Change my brakepads'}).json()
    {u'todo2': u'Change my brakepads'}
    >>> get('http://localhost:5000/todo2').json()
    {u'todo2': u'Change my brakepads'}

Flask-RESTX understands multiple kinds of return values from view methods.
Similar to Flask, you can return any iterable and it will be converted into a response,
including raw Flask response objects.
Flask-RESTX also supports setting the response code and response headers using multiple return values,
as shown below:

.. code-block:: python

    class Todo1(Resource):
        def get(self):
            # Default to 200 OK
            return {'task': 'Hello world'}

    class Todo2(Resource):
        def get(self):
            # Set the response code to 201
            return {'task': 'Hello world'}, 201

    class Todo3(Resource):
        def get(self):
            # Set the response code to 201 and return custom headers
            return {'task': 'Hello world'}, 201, {'Etag': 'some-opaque-string'}
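The return-value convention above can be sketched as a small normalization helper. This illustrates only the ``(data, code, headers)`` convention, not Flask-RESTX's actual internals:

```python
def unpack(rv, default_code=200):
    # Normalize a view return value into a (data, code, headers) triple
    if isinstance(rv, tuple):
        if len(rv) == 3:
            return rv
        if len(rv) == 2:
            data, code = rv
            return data, code, {}
    return rv, default_code, {}

print(unpack({'task': 'Hello world'}))
# -> ({'task': 'Hello world'}, 200, {})
print(unpack(({'task': 'Hello world'}, 201, {'Etag': 'some-opaque-string'})))
# -> ({'task': 'Hello world'}, 201, {'Etag': 'some-opaque-string'})
```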


Endpoints
---------

Many times in an API, your resource will have multiple URLs.
You can pass multiple URLs to the :meth:`~Api.add_resource` method or to the :meth:`~Api.route` decorator,
both on the :class:`~Api` object.
Each one will be routed to your :class:`~Resource`:

.. code-block:: python

    api.add_resource(HelloWorld, '/hello', '/world')

    # or

    @api.route('/hello', '/world')
    class HelloWorld(Resource):
        pass

You can also match parts of the path as variables to your resource methods.

.. code-block:: python

    api.add_resource(Todo, '/todo/<int:todo_id>', endpoint='todo_ep')

    # or

    @api.route('/todo/<int:todo_id>', endpoint='todo_ep')
    class HelloWorld(Resource):
        pass

.. note ::

    If a request does not match any of your application's endpoints,
    Flask-RESTX will return a 404 error message with suggestions of other
    endpoints that closely match the requested endpoint.
    This can be disabled by setting ``RESTX_ERROR_404_HELP`` to ``False`` in your application config.


Argument Parsing
----------------

While Flask provides easy access to request data (i.e. querystring or POST form encoded data),
it's still a pain to validate form data.
Flask-RESTX has built-in support for request data validation
using a library similar to :mod:`python:argparse`.

.. code-block:: python

    from flask_restx import reqparse

    parser = reqparse.RequestParser()
    parser.add_argument('rate', type=int, help='Rate to charge for this resource')
    args = parser.parse_args()

.. note ::

    Unlike the :mod:`python:argparse` module, :meth:`~reqparse.RequestParser.parse_args`
    returns a Python dictionary instead of a custom data structure.

Using the :class:`~reqparse.RequestParser` class also gives you sensible error messages for free.
If an argument fails to pass validation,
Flask-RESTX will respond with a 400 Bad Request and a response highlighting the error.

.. code-block:: console

    $ curl -d 'rate=foo' http://127.0.0.1:5000/todos
    {'status': 400, 'message': 'foo cannot be converted to int'}


The :mod:`~flask_restx.inputs` module provides a number of common conversion
functions such as :func:`~inputs.date` and :func:`~inputs.url`.

Calling :meth:`~reqparse.RequestParser.parse_args` with ``strict=True`` ensures that an error is thrown if
the request includes arguments your parser does not define.

.. code-block:: python

    args = parser.parse_args(strict=True)


Data Formatting
---------------

By default, all fields in your return iterable will be rendered as-is.
While this works great when you're just dealing with Python data structures,
it can become very frustrating when working with objects.
To solve this problem, Flask-RESTX provides the :mod:`fields` module and the
:meth:`marshal_with` decorator.
Similar to the Django ORM and WTForms,
you use the ``fields`` module to describe the structure of your response.

.. code-block:: python

    from flask import Flask
    from flask_restx import fields, Api, Resource

    app = Flask(__name__)
    api = Api(app)

    model = api.model('Model', {
        'task': fields.String,
        'uri': fields.Url('todo_ep')
    })

    class TodoDao(object):
        def __init__(self, todo_id, task):
            self.todo_id = todo_id
            self.task = task

            # This field will not be sent in the response
            self.status = 'active'

    @api.route('/todo')
    class Todo(Resource):
        @api.marshal_with(model)
        def get(self, **kwargs):
            return TodoDao(todo_id='my_todo', task='Remember the milk')


The above example takes a Python object and prepares it to be serialized.
The :meth:`~Api.marshal_with` decorator will apply the transformation described by ``model``.
The only field extracted from the object is ``task``.
The :class:`fields.Url` field is a special field that takes an endpoint name
and generates a URL for that endpoint in the response.
Using the :meth:`~Api.marshal_with` decorator also documents the output in the Swagger specifications.
Many of the field types you need are already included.
See the :mod:`fields` guide for a complete list.
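The core idea of marshaling, picking only the declared fields off an arbitrary object, can be sketched in plain Python. This is an illustration only; the real :func:`marshal` also handles nested fields, defaults and custom attributes:

```python
def marshal_sketch(obj, field_names):
    # Extract only the declared fields from an arbitrary object
    return {name: getattr(obj, name, None) for name in field_names}

class TodoDao(object):
    def __init__(self, todo_id, task):
        self.todo_id = todo_id
        self.task = task
        self.status = 'active'  # not declared, so never serialized

print(marshal_sketch(TodoDao('my_todo', 'Remember the milk'), ['task']))
# -> {'task': 'Remember the milk'}
```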


Order Preservation
~~~~~~~~~~~~~~~~~~

By default, field order is not preserved, as this has a performance cost.
If you still require field order preservation, you can pass an ``ordered=True``
parameter to some classes or functions to force order preservation:

- globally on :class:`Api`: ``api = Api(ordered=True)``
- globally on :class:`Namespace`: ``ns = Namespace(ordered=True)``
- locally on :func:`marshal`: ``return marshal(data, fields, ordered=True)``


Full example
------------

See the :doc:`example` section for a fully functional example.


.. _scaling:

Scaling your project
====================

.. currentmodule:: flask_restx

This page covers building a slightly more complex Flask-RESTX app that
demonstrates some best practices for setting up a real-world Flask-RESTX-based API.
The :ref:`quickstart` section is great for getting started with your first Flask-RESTX app,
so if you're new to Flask-RESTX you'd be better off checking that out first.

Multiple namespaces
-------------------

There are many different ways to organize your Flask-RESTX app,
but here we'll describe one that scales pretty well with larger apps
and maintains a nice level of organization.

Flask-RESTX provides a way to use almost the same pattern as Flask's `blueprint`.
The main idea is to split your app into reusable namespaces.

Here's an example directory structure::

    project/
    ├── app.py
    ├── core
    │   ├── __init__.py
    │   ├── utils.py
    │   └── ...
    └── apis
        ├── __init__.py
        ├── namespace1.py
        ├── namespace2.py
        ├── ...
        └── namespaceX.py


The `app` module will serve as the main application entry point, following one of the classic
Flask patterns (see :doc:`flask:patterns/packages` and :doc:`flask:patterns/appfactories`).

The `core` module is an example; it contains the business logic.
In fact, you can call it whatever you want, and there can be many packages.

The `apis` package will be your main API entry point that you need to import and register on the application,
whereas the namespace modules are reusable namespaces designed like you would do with Flask's Blueprint.

A namespace module contains models and resource declarations.
For example:

.. code-block:: Python

    from flask_restx import Namespace, Resource, fields

    api = Namespace('cats', description='Cats related operations')

    cat = api.model('Cat', {
        'id': fields.String(required=True, description='The cat identifier'),
        'name': fields.String(required=True, description='The cat name'),
    })

    CATS = [
        {'id': 'felix', 'name': 'Felix'},
    ]

    @api.route('/')
    class CatList(Resource):
        @api.doc('list_cats')
        @api.marshal_list_with(cat)
        def get(self):
            '''List all cats'''
            return CATS

    @api.route('/<id>')
    @api.param('id', 'The cat identifier')
    @api.response(404, 'Cat not found')
    class Cat(Resource):
        @api.doc('get_cat')
        @api.marshal_with(cat)
        def get(self, id):
            '''Fetch a cat given its identifier'''
            for cat in CATS:
                if cat['id'] == id:
                    return cat
            api.abort(404)
|
||||
|
||||
|
||||
The `apis.__init__` module should aggregate them:

.. code-block:: python

    from flask_restx import Api

    from .namespace1 import api as ns1
    from .namespace2 import api as ns2
    # ...
    from .namespaceX import api as nsX

    api = Api(
        title='My Title',
        version='1.0',
        description='A description',
        # All API metadata
    )

    api.add_namespace(ns1)
    api.add_namespace(ns2)
    # ...
    api.add_namespace(nsX)

You can define custom URL prefixes for namespaces when registering them on your API;
you don't have to bind a URL prefix when declaring the Namespace object.

.. code-block:: python

    from flask_restx import Api

    from .namespace1 import api as ns1
    from .namespace2 import api as ns2
    # ...
    from .namespaceX import api as nsX

    api = Api(
        title='My Title',
        version='1.0',
        description='A description',
        # All API metadata
    )

    api.add_namespace(ns1, path='/prefix/of/ns1')
    api.add_namespace(ns2, path='/prefix/of/ns2')
    # ...
    api.add_namespace(nsX, path='/prefix/of/nsX')

Using this pattern, you simply have to register your API in `app.py` like this:

.. code-block:: python

    from flask import Flask
    from apis import api

    app = Flask(__name__)
    api.init_app(app)

    app.run(debug=True)

Use With Blueprints
-------------------

See :doc:`flask:blueprints` in the Flask documentation for what blueprints are and why you should use them.
Here's an example of how to link an :class:`Api` up to a :class:`~flask.Blueprint`.
Nested blueprints are not supported.

.. code-block:: python

    from flask import Blueprint
    from flask_restx import Api

    blueprint = Blueprint('api', __name__)
    api = Api(blueprint)
    # ...

Using a `blueprint` will allow you to mount your API on any URL prefix and/or subdomain
in your application:

.. code-block:: python

    from flask import Flask
    from apis import blueprint as api

    app = Flask(__name__)
    app.register_blueprint(api, url_prefix='/api/1')
    app.run(debug=True)

.. note::

    Calling :meth:`Api.init_app` is not required here because registering the
    blueprint with the app takes care of setting up the routing for the application.

.. note::

    When using blueprints, remember to use the blueprint name with :func:`~flask.url_for`:

    .. code-block:: python

        # without blueprint
        url_for('my_api_endpoint')

        # with blueprint
        url_for('api.my_api_endpoint')

Multiple APIs with reusable namespaces
--------------------------------------

Sometimes you need to maintain multiple versions of an API.
If you built your API using namespace composition,
it's quite simple to scale it to multiple APIs.

Given the previous layout, we can migrate it to the following directory structure::

    project/
    ├── app.py
    ├── apiv1.py
    ├── apiv2.py
    └── apis
        ├── __init__.py
        ├── namespace1.py
        ├── namespace2.py
        ├── ...
        └── namespaceX.py

Each `apis/namespaceX` module will have the following pattern:

.. code-block:: python

    from flask_restx import Namespace, Resource

    api = Namespace('mynamespace', 'Namespace Description')

    @api.route('/')
    class MyClass(Resource):
        def get(self):
            return {}

Each `apivX` module will have the following pattern:

.. code-block:: python

    from flask import Blueprint
    from flask_restx import Api

    from .apis.namespace1 import api as ns1
    from .apis.namespace2 import api as ns2
    # ...
    from .apis.namespaceX import api as nsX

    blueprint = Blueprint('api', __name__, url_prefix='/api/1')
    api = Api(blueprint,
        title='My Title',
        version='1.0',
        description='A description',
        # All API metadata
    )

    api.add_namespace(ns1)
    api.add_namespace(ns2)
    # ...
    api.add_namespace(nsX)

And the app will simply mount them:

.. code-block:: python

    from flask import Flask
    from apiv1 import blueprint as api1
    from apivX import blueprint as apiX

    app = Flask(__name__)
    app.register_blueprint(api1)
    app.register_blueprint(apiX)
    app.run(debug=True)

These are only proposals and you can do whatever suits your needs.
Look at the `github repository examples folder`_ for more complete examples.

.. _github repository examples folder: https://github.com/python-restx/flask-restx/tree/master/examples