
Readme update in progress #586


Merged: 53 commits, Apr 13, 2023
Commits
0a33546
Readme update in progress
santiatpml Apr 5, 2023
c006523
Updated hugs emoji
santiatpml Apr 5, 2023
ba8d050
Readme added dashboard image
santiatpml Apr 5, 2023
ae520e6
Getting started in progress
santiatpml Apr 5, 2023
d822e43
Getting started in progress
santiatpml Apr 5, 2023
a7e9ce4
Added notebooks image
santiatpml Apr 5, 2023
7c8b982
Updated dashboard image and some edits
santiatpml Apr 5, 2023
3ae4024
Added protobuf for finbert support and text-classification readme in …
santiatpml Apr 5, 2023
0daba37
Using sql instead of json for highlighting
santiatpml Apr 5, 2023
0e51c29
update dependencies (#588)
montanalow Apr 5, 2023
3e06339
Updates to text-classification
santiatpml Apr 6, 2023
755580a
First version of text classification
santiatpml Apr 6, 2023
345eb79
Added grammatical correctness
santiatpml Apr 6, 2023
b6cfcdd
Added zero-shot classification
santiatpml Apr 6, 2023
d025f12
readme for token classification
santiatpml Apr 7, 2023
91557e3
Moved results from sql to json
santiatpml Apr 7, 2023
4ffae4e
Images for different tasks
santiatpml Apr 7, 2023
4f21192
Updated table of contents
santiatpml Apr 7, 2023
db9523c
Update to 0.7.4 (#591)
Apr 7, 2023
e02eaff
fix for np.float32 serialization (#589)
santiatpml Apr 7, 2023
8c3ee5e
Readme update in progress
santiatpml Apr 5, 2023
b6476eb
Updated hugs emoji
santiatpml Apr 5, 2023
5a03402
Readme added dashboard image
santiatpml Apr 5, 2023
970b7be
Getting started in progress
santiatpml Apr 5, 2023
3938ba5
Getting started in progress
santiatpml Apr 5, 2023
7edfbf4
Added notebooks image
santiatpml Apr 5, 2023
cb9b2d4
Updated dashboard image and some edits
santiatpml Apr 5, 2023
2f33c43
Added protobuf for finbert support and text-classification readme in …
santiatpml Apr 5, 2023
47e0cea
Using sql instead of json for highlighting
santiatpml Apr 5, 2023
ad16887
Updates to text-classification
santiatpml Apr 6, 2023
8721ce8
First version of text classification
santiatpml Apr 6, 2023
daf045c
Added grammatical correctness
santiatpml Apr 6, 2023
5749330
Added zero-shot classification
santiatpml Apr 6, 2023
a2bcd1d
readme for token classification
santiatpml Apr 7, 2023
6c3a98c
Moved results from sql to json
santiatpml Apr 7, 2023
760b520
Images for different tasks
santiatpml Apr 7, 2023
fca5ef2
Updated table of contents
santiatpml Apr 7, 2023
c347f9b
Documentation for more tasks
santiatpml Apr 7, 2023
a1ef779
Updated with more tasks
santiatpml Apr 7, 2023
f94cc3c
Expanded text generation section
santiatpml Apr 7, 2023
f8891c2
Removed Table QA from toc
santiatpml Apr 7, 2023
8381fe8
Text2text generation
santiatpml Apr 10, 2023
592fc59
Added fill mask section
santiatpml Apr 10, 2023
42a6541
Started Vector DB section
santiatpml Apr 11, 2023
c728d7e
First version of vector databases
santiatpml Apr 11, 2023
3ee5b8c
Reset docker compose and docker local to original
santiatpml Apr 11, 2023
c9596a7
Update README.md
santiatpml Apr 12, 2023
bd197a6
Update README.md
santiatpml Apr 12, 2023
629ffe0
Update README.md
santiatpml Apr 12, 2023
a3f45c9
Update README.md
santiatpml Apr 12, 2023
d2bd901
Update README.md
santiatpml Apr 12, 2023
0016d07
Update README.md
santiatpml Apr 12, 2023
27e1029
Updated tagline
santiatpml Apr 12, 2023
856 changes: 811 additions & 45 deletions README.md

Binary file added pgml-docs/docs/images/dashboard.png
Binary file added pgml-docs/docs/images/fill-mask.png
Binary file added pgml-docs/docs/images/notebooks.png
Binary file added pgml-docs/docs/images/question-answering.png
Binary file added pgml-docs/docs/images/sentence-similarity.png
Binary file added pgml-docs/docs/images/summarization.png
Binary file added pgml-docs/docs/images/text-classification.png
Binary file added pgml-docs/docs/images/text-generation.png
Binary file added pgml-docs/docs/images/token-classification.png
Binary file added pgml-docs/docs/images/translation.png
571 changes: 317 additions & 254 deletions pgml-extension/Cargo.lock

6 changes: 3 additions & 3 deletions pgml-extension/Cargo.toml
@@ -18,8 +18,8 @@ python = ["pyo3"]
cuda = ["xgboost/cuda", "lightgbm/cuda"]

[dependencies]
pgx = "=0.7.1"
pgx-pg-sys = "=0.7.1"
pgx = "=0.7.4"
pgx-pg-sys = "=0.7.4"
xgboost = { git="https://github.com/postgresml/rust-xgboost.git", branch = "master" }
once_cell = "1"
rand = "0.8"
@@ -48,7 +48,7 @@ flate2 = "1.0"
csv = "1.1"

[dev-dependencies]
pgx-tests = "=0.7.1"
pgx-tests = "=0.7.4"

[profile.dev]
panic = "unwind"
2 changes: 1 addition & 1 deletion pgml-extension/Dockerfile
@@ -37,7 +37,7 @@ RUN useradd postgresml -m -s /bin/bash -G sudo
RUN echo 'postgresml ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
USER postgresml
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
RUN $HOME/.cargo/bin/cargo install cargo-pgx --version "0.7.1"
RUN $HOME/.cargo/bin/cargo install cargo-pgx --version "0.7.4"
RUN $HOME/.cargo/bin/cargo pgx init
RUN curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/apt.postgresql.org.gpg >/dev/null
RUN sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
90 changes: 90 additions & 0 deletions pgml-extension/examples/finetune.sql
@@ -0,0 +1,90 @@
-- Exit on error (psql)
\set ON_ERROR_STOP true
\timing on


SELECT pgml.load_dataset('kde4', kwargs => '{"lang1": "en", "lang2": "es"}');
CREATE OR REPLACE VIEW kde4_en_to_es AS
SELECT translation->>'en' AS "en", translation->>'es' AS "es"
FROM pgml.kde4
LIMIT 10;
SELECT pgml.tune(
'Translate English to Spanish',
task => 'translation',
relation_name => 'kde4_en_to_es',
y_column_name => 'es', -- translate into spanish
model_name => 'Helsinki-NLP/opus-mt-en-es',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_length": 128
}',
test_size => 0.5,
test_sampling => 'last'
);

SELECT pgml.load_dataset('imdb');
SELECT pgml.tune(
'IMDB Review Sentiment',
task => 'text-classification',
relation_name => 'pgml.imdb',
y_column_name => 'label',
model_name => 'distilbert-base-uncased',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01
}',
test_size => 0.5,
test_sampling => 'last'
);
SELECT pgml.predict('IMDB Review Sentiment', 'I love SQL');

SELECT pgml.load_dataset('squad_v2');
SELECT pgml.tune(
'SQuAD Q&A v2',
'question-answering',
'pgml.squad_v2',
'answers',
'deepset/roberta-base-squad2',
hyperparams => '{
"evaluation_strategy": "epoch",
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_length": 384,
"stride": 128
}',
test_size => 11873,
test_sampling => 'last'
);


SELECT pgml.load_dataset('billsum', kwargs => '{"split": "ca_test"}');
CREATE OR REPLACE VIEW billsum_training_data
AS SELECT title || '\n' || text AS text, summary FROM pgml.billsum;
SELECT pgml.tune(
'Legal Summarization',
task => 'summarization',
relation_name => 'billsum_training_data',
y_column_name => 'summary',
model_name => 'sshleifer/distilbart-xsum-12-1',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 2,
"per_device_eval_batch_size": 2,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_input_length": 1024,
"max_summary_length": 128
}',
test_size => 0.01,
test_sampling => 'last'
);
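The tuned projects registered above can be exercised straight from SQL with `pgml.predict`. The following is a minimal sketch: only the IMDB call appears in finetune.sql, and the translation and summarization invocations are assumed to follow the same `pgml.predict(project_name, input)` pattern.

```sql
-- Minimal sketch: reuse the project names registered by pgml.tune() above.
-- Only the IMDB call is shown in finetune.sql; the translation and
-- summarization calls are assumed to share the same signature.
SELECT pgml.predict('IMDB Review Sentiment', 'I love SQL');
SELECT pgml.predict('Translate English to Spanish', 'The tutorial was easy to follow.');
SELECT pgml.predict('Legal Summarization', 'The bill requires state agencies to publish annual energy usage reports.');
```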
129 changes: 50 additions & 79 deletions pgml-extension/examples/transformers.sql
@@ -32,89 +32,60 @@ SELECT pgml.transform(
'Dominic Cobb is the foremost practitioner of the artistic science of extraction, inserting oneself into a subject''s dreams to obtain hidden information without the subject knowing, a concept taught to him by his professor father-in-law, Dr. Stephen Miles. Dom''s associates are Miles'' former students, who Dom requires as he has given up being the dream architect for reasons he won''t disclose. Dom''s primary associate, Arthur, believes it has something to do with Dom''s deceased wife, Mal, who often figures prominently and violently in those dreams, or Dom''s want to "go home" (get back to his own reality, which includes two young children). Dom''s work is generally in corporate espionage. As the subjects don''t want the information to get into the wrong hands, the clients have zero tolerance for failure. Dom is also a wanted man, as many of his past subjects have learned what Dom has done to them. One of those subjects, Mr. Saito, offers Dom a job he can''t refuse: to take the concept one step further into inception, namely planting thoughts into the subject''s dreams without them knowing. Inception can fundamentally alter that person as a being. Saito''s target is Robert Michael Fischer, the heir to an energy business empire, which has the potential to rule the world if continued on the current trajectory. Beyond the complex logistics of the dream architecture of the case and some unknowns concerning Fischer, the biggest obstacles in success for the team become worrying about one aspect of inception which Cobb fails to disclose to the other team members prior to the job, and Cobb''s newest associate Ariadne''s belief that Cobb''s own subconscious, especially as it relates to Mal, may be taking over what happens in the dreams.'
]
);
SELECT pgml.transform(
inputs => ARRAY[
'I love how amazingly simple ML has become!',
'I hate doing mundane and thankless tasks. ☹️'
],
task => '{"task": "text-classification",
"model": "finiteautomata/bertweet-base-sentiment-analysis"
}'::JSONB
) AS positivity;

SELECT pgml.load_dataset('kde4', kwargs => '{"lang1": "en", "lang2": "es"}');
CREATE OR REPLACE VIEW kde4_en_to_es AS
SELECT translation->>'en' AS "en", translation->>'es' AS "es"
FROM pgml.kde4
LIMIT 10;
SELECT pgml.tune(
'Translate English to Spanish',
task => 'translation',
relation_name => 'kde4_en_to_es',
y_column_name => 'es', -- translate into spanish
model_name => 'Helsinki-NLP/opus-mt-en-es',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_length": 128
}',
test_size => 0.5,
test_sampling => 'last'
);
SELECT pgml.transform(
task => 'text-classification',
inputs => ARRAY[
'I love how amazingly simple ML has become!',
'I hate doing mundane and thankless tasks. ☹️'
]
) AS positivity;

SELECT pgml.transform(
inputs => ARRAY[
'Stocks rallied and the British pound gained.',
'Stocks making the biggest moves midday: Nvidia, Palantir and more'
],
task => '{"task": "text-classification",
"model": "ProsusAI/finbert"
}'::JSONB
) AS market_sentiment;

SELECT pgml.load_dataset('imdb');
SELECT pgml.tune(
'IMDB Review Sentiment',
task => 'text-classification',
relation_name => 'pgml.imdb',
y_column_name => 'label',
model_name => 'distilbert-base-uncased',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01
}',
test_size => 0.5,
test_sampling => 'last'
SELECT pgml.transform(
inputs => ARRAY[
'I have a problem with my iphone that needs to be resolved asap!!'
],
task => '{"task": "zero-shot-classification",
"model": "roberta-large-mnli"
}'::JSONB,
args => '{"candidate_labels": ["urgent", "not urgent", "phone", "tablet", "computer"]
}'::JSONB
) AS zero_shot;

SELECT pgml.transform(
inputs => ARRAY[
'Hugging Face is a French company based in New York City.'
],
task => 'token-classification'
);
SELECT pgml.predict('IMDB Review Sentiment', 'I love SQL');

SELECT pgml.load_dataset('squad_v2');
SELECT pgml.tune(
'SQuAD Q&A v2',
SELECT pgml.transform(
'question-answering',
'pgml.squad_v2',
'answers',
'deepset/roberta-base-squad2',
hyperparams => '{
"evaluation_strategy": "epoch",
"learning_rate": 2e-5,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_length": 384,
"stride": 128
}',
test_size => 11873,
test_sampling => 'last'
);
inputs => ARRAY[
'{
"question": "Am I dreaming?",
"context": "I got a good nights sleep last night and started a simple tutorial over my cup of morning coffee. The capabilities seem unreal, compared to what I came to expect from the simple SQL standard I studied so long ago. The answer is staring me in the face, and I feel the uncanny call from beyond the screen to check the results."
}'
]
) AS answer;


SELECT pgml.load_dataset('billsum', kwargs => '{"split": "ca_test"}');
CREATE OR REPLACE VIEW billsum_training_data
AS SELECT title || '\n' || text AS text, summary FROM pgml.billsum;
SELECT pgml.tune(
'Legal Summarization',
task => 'summarization',
relation_name => 'billsum_training_data',
y_column_name => 'summary',
model_name => 'sshleifer/distilbart-xsum-12-1',
hyperparams => '{
"learning_rate": 2e-5,
"per_device_train_batch_size": 2,
"per_device_eval_batch_size": 2,
"num_train_epochs": 1,
"weight_decay": 0.01,
"max_input_length": 1024,
"max_summary_length": 128
}',
test_size => 0.01,
test_sampling => 'last'
);
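For models that don't need fine-tuning, the same tasks can be run ad hoc with `pgml.transform()`. The sketch below follows the call shape used in this file; the summarization model name is borrowed from the `pgml.tune()` example in finetune.sql, and its use with `transform()` is an assumption.

```sql
-- Minimal sketch following the pgml.transform() calls above; the model
-- name comes from the summarization tune() example and is an assumption here.
SELECT pgml.transform(
    task => '{"task": "summarization",
              "model": "sshleifer/distilbart-xsum-12-1"
    }'::JSONB,
    inputs => ARRAY[
        'Paris is the capital and most populous city of France, with an estimated population of over two million residents in an area of more than 105 square kilometres.'
    ]
) AS summary;
```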
12 changes: 9 additions & 3 deletions pgml-extension/src/bindings/transformers.py
@@ -3,7 +3,7 @@
import math
import shutil
import time

import numpy as np

import datasets
from rouge import Rouge
@@ -40,6 +40,12 @@
__cache_transformer_by_model_id = {}
__cache_sentence_transformer_by_name = {}

class NumpyJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.float32):
return float(obj)
return super().default(obj)

def transform(task, args, inputs):
task = json.loads(task)
args = json.loads(args)
@@ -50,7 +56,7 @@ def transform(task, args, inputs):
if pipe.task == "question-answering":
inputs = [json.loads(input) for input in inputs]

return json.dumps(pipe(inputs, **args))
return json.dumps(pipe(inputs, **args), cls = NumpyJSONEncoder)

def embed(transformer, text, kwargs):
kwargs = json.loads(kwargs)
@@ -101,7 +107,7 @@ def tokenize_summarization(tokenizer, max_length, x, y):
return datasets.Dataset.from_dict(encoding.data)

def tokenize_text_generation(tokenizer, max_length, y):
encoding = tokenizer(y, max_length=max_length)
encoding = tokenizer(y, max_length=max_length, truncation=True, padding="max_length")
return datasets.Dataset.from_dict(encoding.data)

def tokenize_question_answering(tokenizer, max_length, x, y):
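The `NumpyJSONEncoder` change above matters because transformers pipelines return `numpy.float32` scores that the standard `json` module refuses to serialize. A minimal sketch of the downstream effect in SQL, assuming the result of `pgml.transform()` casts to `JSONB` and each element carries the `label` and `score` keys that Hugging Face pipelines emit:

```sql
-- Minimal sketch: unpack classifier scores from the JSON returned by
-- pgml.transform(). Assumes the result casts cleanly to JSONB and each
-- element carries "label" and "score" keys.
SELECT elem ->> 'label' AS label,
       (elem ->> 'score')::float AS score
FROM jsonb_array_elements(
    pgml.transform(
        task => 'text-classification',
        inputs => ARRAY['I love how amazingly simple ML has become!']
    )::JSONB
) AS elem;
```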
1 change: 1 addition & 0 deletions pgml-extension/tests/test.sql
@@ -27,3 +27,4 @@ SELECT pgml.load_dataset('wine');
\i examples/multi_classification.sql
\i examples/regression.sql
\i examples/vectors.sql