🔋Building Artificial Intelligence and Data Science models for a living
🎓PhD Maths
Minted this free collectible in celebration of @orbapp.lens funding!
The era of everyday web3 social starts here!
LFG!🙌
I agree with the recommendation part and its challenges. Building a robust recommendation system is hard because it combines a wide variety of technologies.
I’m working on coupling machine learning models with reasoning capabilities; specifically, I’m injecting mathematical logics into the learning algorithms to extract expressive theories from data. Moreover, by drawing inspiration from this, I also plan to develop explainable generative AI models to address crucial issues, such as gender and racial biases.
Gm Jessy!
I will have a look into your application. What LLM have you used?
I wasn't familiar with @attraction.lens, but I will also catch up! What was their contribution?
Gm Ryan! Thank you for your questions and insights.
I will look into it; from an initial glimpse, it seems very interesting.
So you'd like to use a Large Language Model, or LLM for short.
Two ways to work with an LLM are fine-tuning and prompting. The former requires additional expertise and computational resources to adapt an LLM to particular cases, while the latter lets end-users harness the potential of a pre-trained model as-is. Fine-tuning is the best way to instruct the machine for specific tasks, as you can teach it directly. Prompting, on the other hand, depends on how well you engineer the inputs to the model. Moreover, if you use an interface such as ChatGPT et al., you don't need to know coding, but you face some limitations (e.g., the length of the output in tokens), whereas programmatic prompting requires coding but is more flexible.
Both approaches, therefore, apply directly to matching tonality. I suggest fine-tuning an LLM for such a task. On the other hand, prompting could be fine to grasp the idea; for instance, ask an LLM to write similarly by providing examples, or to write with a specific tonality (if you don't know the available tonalities, you can ask the LLM for a list of them).
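To make the prompting route concrete, here is a minimal sketch of building a few-shot prompt for tonality matching. The function name, template wording, and example pair are my own illustrative placeholders, not a fixed recipe; the resulting string would be sent to whichever LLM or interface you use.

```python
# Hypothetical sketch: assembling a few-shot prompt so a pre-trained LLM
# mimics a target tonality. Template and examples are illustrative only.

def build_tonality_prompt(examples, new_text, tonality="friendly"):
    """Build a few-shot prompt from (original, rewritten) example pairs."""
    lines = [f"Rewrite the text below in a {tonality} tonality."]
    for source, rewrite in examples:
        lines.append(f"Text: {source}\nRewrite: {rewrite}")
    # Leave the last rewrite empty for the model to complete.
    lines.append(f"Text: {new_text}\nRewrite:")
    return "\n\n".join(lines)

examples = [
    ("Submit the form by Friday.",
     "Hey! Could you send the form over by Friday? Thanks!"),
]
prompt = build_tonality_prompt(examples, "The meeting is cancelled.")
print(prompt)
```

The more (and more varied) example pairs you include, the better the model tends to lock onto the tonality, up to the model's context-length limit.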
I'm unsure what you mean by "how it can understand my collected music/art preferences." Do you mean to have a recommendation system that feeds you with similar content?
Now, let us talk about the most challenging part: data. Data analysis means inspecting, cleaning, transforming, and conceptualizing data, and it unfolds through multiple layers of a (generic) iterative pre-processing pipeline:
data collection and integration (e.g., dealing with multimodal data types),
data cleaning (e.g., missing value imputation),
data transformation (e.g., tokenization and stemming for texts),
feature engineering (e.g., extracting interpretable summaries),
data reduction (e.g., Principal Component Analysis),
data partitioning (e.g., train-validation-test split), and
data augmentation and encoding (e.g., representing categorical variables as numerical ones).
That said, when using LLMs and deep learning architectures in general, you can often stop the pipeline at the 3rd point (data transformation), because deep neural nets learn their own features and thus handle feature engineering for you.
If you plan to work with streaming data, use Apache Flink; for batch data, use Apache Spark. I generally store data in a database and query it when needed for downstream tasks, such as training a machine learning model.
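The store-then-query pattern looks roughly like this with SQLite from Python's standard library; the table, columns, and toy ratings are hypothetical stand-ins for whatever schema your project actually uses:

```python
# Sketch of "store in a database, query for the downstream task".
# Schema and data are illustrative placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")  # swap for a file or a real DB server
conn.execute(
    "CREATE TABLE interactions (user_id TEXT, item_id TEXT, rating REAL)"
)
conn.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?)",
    [("u1", "song_a", 4.5), ("u1", "song_b", 3.0), ("u2", "song_a", 5.0)],
)

# Query only what the downstream task (e.g., model training) needs.
cursor = conn.execute(
    "SELECT item_id, AVG(rating) FROM interactions "
    "GROUP BY item_id ORDER BY item_id"
)
result = cursor.fetchall()
print(result)  # → [('song_a', 4.75), ('song_b', 3.0)]
conn.close()
```

The same shape carries over to Spark SQL or Flink SQL; only the engine behind the query changes.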
I really hope that this answers some of your questions; feel free to follow up if not.
I wondered how people are using Artificial Intelligence and Data Science here.
I want to donate my skills to the @lensprotocol.lens ecosystem or projects built on it. I humbly believe it holds many prospects for the whole of web3.
Does anyone need my help? It would be a win-win situation for both sides. I’m trying to grow in the ecosystem, and you could harness my 7+ years of research in the field.
I don’t know (yet) React, JS, Node, et al. I’m trying to learn. But I know a lot about data and machine learning.
I would much appreciate any feedback/mirror!
Short answer: We will always need it.
Long answer: Life is unpredictable. There are countless situations in which new tech could save many lives: preventing mothers from dying in childbirth, and curing and preventing known and unknown diseases (e.g., cancer, neurological diseases, a new pandemic), among many others. Now imagine an eye surgery. It is known that in the 1700s, Euler went blind after the many eye surgeries of that period; nevertheless, he still produced numerous mathematical principles that guide our present tech. He would have benefited from (new) tech if he had had it, don’t you think?
A researcher’s opinion.
Life’s equation
Happiness is essential in life, and it’s not a constant (e.g., we all experience ups and downs).
Finding the right balance between different aspects of life, such as meaning and purpose, is hard, but this is life’s endeavor.
Also, happiness is subjective, which implies people should seek their own and recognize it when they find it.
#life #purpose #happiness #time #philosophy