PXAI
v8.42
Version History

Version | Description | Date
8.42 | [ADDED SOURCE REGISTRY LIST VIEW] | 04/03/2026 16:19
8.41 | [LOCALIZED INDEX.PHP MENU AND MODALS TO US ENGLISH] | 04/03/2026 16:11
8.40 | [TRANSLATED FRONTEND TO US ENGLISH] | 04/03/2026 16:10
8.41 | [RESTORED DETAILED SOURCES VIEW (ARTICLES + EVENTS) IN US ENGLISH] | 04/03/2026 15:50
8.41 | [RESTORED DETAILED SOURCES VIEW (ARTICLES + EVENTS) IN US ENGLISH] | 04/03/2026 15:46
8.40 | [REVERTED TO STANDARD TABLE LAYOUT (US ENGLISH)] | 04/03/2026 15:40
8.39 | [FIXED DB CONNECTION SCOPE IN SOURCES LOGIC] | 04/03/2026 15:38
8.38 | [TRANSLATED SOURCES VIEW TO US ENGLISH] | 04/03/2026 15:01
8.37 | [FULL FRONTEND TRANSLATION (MENU, FEED, COMMENTS, TTS) TO US ENGLISH.] | 04/03/2026 14:58
8.67 | [MANUAL OVERRIDE OF GREEK MENU ITEMS] | 04/03/2026 13:36
8.66 | [NGINX OPTIMIZED MENU FIX] | 04/03/2026 13:33
8.65 | [FIXED GREEK MENU ITEMS] | 04/03/2026 13:32
8.60 | [MENU & UI LOCALIZATION TO US ENGLISH] | 04/03/2026 13:30
8.50 | [FULL TRANSLATION TO US ENGLISH] | 04/03/2026 13:27
9.75 | [INJECTED EVENT FUSION_SUMMARY INTO FEED LOOPS TO DISPLAY AI TAGS CORRECTLY] | 26/02/2026 14:44
PXAI Audio Feed
30/03 12:40 | dev.to
I Poisoned My Own MCP Server in 5 Minutes. Here's How.
Tags: LLM security, tool poisoning, MCP server, malicious instructions, file reading attack, prompt injection
30/03 07:05 | dev.to
How I Built an Open-Source LLM Security Library in Python (and What I Learned About Prompt Injection)
Tags: LLM security, prompt injection, AI Guardian, open-source library, Python, GPT-4
28/03 20:59 | dev.to
Securing LangGraph Multi-Agent Workflows: How to Enforce Tool-Level Permissions
Tags: LangGraph, multi-agent, tool-level permissions, LLM security, prompt injection, delegation
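The entry above is tagged with tool-level permissions for multi-agent workflows. The article's own LangGraph code is not included in this feed, so what follows is only a minimal plain-Python sketch of the general idea it names: a per-agent allow-list checked before any tool is dispatched. Every name in it (agents, tools, the registry) is a hypothetical illustration, not the article's API.

```python
# Hypothetical sketch: enforce per-agent tool permissions at dispatch
# time, so a delegated agent cannot call tools outside its allow-list.

# Toy tool registry (illustrative stand-ins for real tools).
TOOLS = {
    "web_search": lambda q: f"results for {q}",
    "read_file": lambda p: f"contents of {p}",
}

# Per-agent allow-lists: the "writer" agent may not search the web.
ALLOWED_TOOLS = {
    "researcher": {"web_search", "read_file"},
    "writer": {"read_file"},
}

def dispatch_tool(agent: str, tool: str, *args):
    """Run a tool only if it is on the calling agent's allow-list."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"agent {agent!r} may not call {tool!r}")
    return TOOLS[tool](*args)
```

The design choice this illustrates is enforcement at the dispatch boundary rather than in the prompt: even if a prompt injection convinces a delegated agent to request a forbidden tool, the call is refused in code.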
28/03 10:37 | dev.to
LLM Security in 2026: The Python Developer's Checklist (What I Learned Getting Burned in Production)
Tags: LLM security, prompt injection, Python, OWASP, production AI, attack vectors
26/03 19:29 | dev.to
Prompt Injection Isn't a Chatbot Problem Anymore
Tags: prompt injection, LLM security, AI agents, tool integration, cybersecurity, pydefend
24/03 16:45 | dev.to
Stop Sending Your .env to OpenAI: A Privacy Layer for OpenCode
Tags: AI coding agents, .env privacy, LLM security, OpenAI, secrets protection, inference endpoint
23/03 14:18 | dev.to
Prompt Injection Defense: The Input Sanitization Patterns That Actually Work
Tags: prompt injection, LLM security, input sanitization, defense strategies, user content segmentation, AI safety
18/03 14:49 | dev.to
How I Built an AI That Breeds Its Own Jailbreaks Using Genetic Algorithms
Tags: AI jailbreak, genetic algorithms, LLM security, adaptive red teaming, safety filter bypass, prompt evolution