{"id":98,"date":"2024-10-17T00:33:42","date_gmt":"2024-10-17T04:33:42","guid":{"rendered":"http:\/\/localhost:4000\/?p=98"},"modified":"2024-10-17T18:01:57","modified_gmt":"2024-10-17T22:01:57","slug":"how-can-i-host-my-ai-locally","status":"publish","type":"post","link":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/","title":{"rendered":"How can I host my AI locally?"},"content":{"rendered":"\n<p>To host your AI locally using something like <strong>Ollama<\/strong>, here\u2019s a breakdown of the steps from the transcript:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. Hardware Requirements<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You don\u2019t need a super-powerful machine like the AI server called &#8220;Terry&#8221; that was built in the transcript. A regular laptop or desktop running <strong>Windows, Mac, or Linux<\/strong> should suffice.<\/li>\n\n\n\n<li>If you have a <strong>GPU<\/strong>, it will significantly improve performance, especially when handling large models like <strong>Llama 2<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. Install Ollama<\/strong><\/h3>\n\n\n\n<p>Ollama is the foundation for running AI models locally.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>For Mac Users:<\/strong>\n<ul class=\"wp-block-list\">\n<li>Head over to <a href=\"https:\/\/ollama.ai\">Ollama<\/a> and download the Mac version, then install it like any regular app.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>For Windows Users:<\/strong><ul><li>You\u2019ll need <strong>Windows Subsystem for Linux (WSL)<\/strong>. 
Here\u2019s how to set it up:<\/li><\/ul>\n<ol class=\"wp-block-list\">\n<li>Open <strong>Windows Terminal<\/strong> by searching for it in the start menu.<\/li>\n\n\n\n<li>Install WSL by running the following command:<br><code>wsl --install<\/code><\/li>\n\n\n\n<li>Once installed, set up <strong>Ubuntu 22.04 LTS<\/strong> and update your system:<br><code>sudo apt update &amp;&amp; sudo apt upgrade -y<\/code><\/li>\n\n\n\n<li>Install Ollama on Ubuntu using the following command:<br><code>curl https:\/\/ollama.ai\/install.sh | bash<\/code><\/li>\n\n\n\n<li>From this point on, the steps are the same for Mac, Linux, and Windows users.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Download an AI Model<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>To use <strong>Llama 2<\/strong>, download it using Ollama:<br><code>ollama pull llama2<\/code><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. Run the AI Model<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Once installed, you can run the model with:<br><code>ollama run llama2<\/code><\/li>\n\n\n\n<li>The model will now be ready to answer questions entirely offline, on your own machine.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5. Interact with the Model<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>To confirm the service is up, open a new browser window and type <code>localhost:11434<\/code> in the address bar. You should see the message &#8220;Ollama is running&#8221;, which means Ollama\u2019s API service is available locally.<\/li>\n\n\n\n<li>You can start interacting with the AI by typing questions or prompts into your terminal, like:<br><code>What is a pug?<\/code><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>6. 
Expand Functionality with a Web Interface<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You can add a graphical user interface (GUI) for easier interaction by setting up <strong>Open WebUI<\/strong>, which provides a beautiful chat interface and additional features like chat history and the ability to switch between multiple models.<\/li>\n\n\n\n<li>This requires Docker to be installed:\n<ol class=\"wp-block-list\">\n<li>Install Docker by running the following commands:<br><code>sudo apt update &amp;&amp; sudo apt install -y docker.io<\/code><\/li>\n\n\n\n<li>Run Open WebUI with Docker (because <code>--network=host<\/code> shares the host\u2019s network with the container, no <code>-p<\/code> port mapping is needed):<br><code>sudo docker run -d --network=host -v open-webui:\/app\/backend\/data -e OLLAMA_BASE_URL=http:\/\/127.0.0.1:11434 --name open-webui --restart always ghcr.io\/open-webui\/open-webui:main<\/code><\/li>\n\n\n\n<li>Open your browser and go to <code>localhost:8080<\/code> to access the web interface.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>7. Add Multiple AI Models<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ollama allows you to pull and use multiple models. For example, to add <strong>Code Llama<\/strong>:<br><code>ollama pull codellama<\/code><\/li>\n\n\n\n<li>Switch between models in the Open WebUI interface to try different AI models based on your needs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>8. Control and Customize<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The web interface allows you to add restrictions, such as limiting which models can be used, monitoring users (if it&#8217;s shared), and even creating tailored models with specific behavior (e.g., limiting how the AI responds to your kids\u2019 questions).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>9. 
Image Generation with Stable Diffusion<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If you want to generate images locally, you can integrate <strong>Stable Diffusion<\/strong> using Docker and the <strong>AUTOMATIC1111<\/strong> interface:\n<ol class=\"wp-block-list\">\n<li>Follow the <strong>Stable Diffusion<\/strong> setup by installing the necessary prerequisites and dependencies (Python and, ideally, a GPU with enough VRAM).<\/li>\n\n\n\n<li>Run the Stable Diffusion Docker container with its API enabled, then connect it to Open WebUI in the image-generation settings to generate images directly from prompts.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n\n\n\n<p>By following these steps, you can run powerful AI models, including chatbots and image generators, entirely on your local machine without relying on external servers. The best part is that all data remains private and fully under your control.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"host ALL your AI locally\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/Wjrdr0NU4Sk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>To host your AI locally using something like Ollama, here\u2019s a breakdown of the steps from the transcript: 1. Hardware Requirements 2. Install Ollama Ollama is the foundation for running AI models locally. 3. Download an AI Model 4. Run the AI Model 5. Interact with the Model 6. 
Expand Functionality with a Web Interface [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":99,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[53,4],"tags":[48,47,50,49,8,52,51,6],"class_list":["post-98","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-tech","tag-docker","tag-ollama","tag-open-web-ui","tag-openwebui","tag-server","tag-stable-diffusion","tag-stablediffusion","tag-ubuntu"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How can I host my AI locally? - My Tech Talks with ChatGPT<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How can I host my AI locally? - My Tech Talks with ChatGPT\" \/>\n<meta property=\"og:description\" content=\"To host your AI locally using something like Ollama, here\u2019s a breakdown of the steps from the transcript: 1. Hardware Requirements 2. Install Ollama Ollama is the foundation for running AI models locally. 3. Download an AI Model 4. Run the AI Model 5. Interact with the Model 6. 
Expand Functionality with a Web Interface [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\" \/>\n<meta property=\"og:site_name\" content=\"My Tech Talks with ChatGPT\" \/>\n<meta property=\"article:published_time\" content=\"2024-10-17T04:33:42+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-10-17T22:01:57+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"720\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"adminwp\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"adminwp\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\"},\"author\":{\"name\":\"adminwp\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580\"},\"headline\":\"How can I host my AI 
locally?\",\"datePublished\":\"2024-10-17T04:33:42+00:00\",\"dateModified\":\"2024-10-17T22:01:57+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\"},\"wordCount\":511,\"publisher\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580\"},\"image\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp\",\"keywords\":[\"docker\",\"ollama\",\"Open web UI\",\"openwebui\",\"server\",\"Stable Diffusion\",\"stablediffusion\",\"ubuntu\"],\"articleSection\":[\"AI\",\"Technology\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\",\"url\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\",\"name\":\"How can I host my AI locally? - My Tech Talks with ChatGPT\",\"isPartOf\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp\",\"datePublished\":\"2024-10-17T04:33:42+00:00\",\"dateModified\":\"2024-10-17T22:01:57+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage\",\"url\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp\",
\"contentUrl\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp\",\"width\":1280,\"height\":720},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/zenteno.org\/tech-talks\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How can I host my AI locally?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#website\",\"url\":\"https:\/\/zenteno.org\/tech-talks\/\",\"name\":\"My Tech Talks with ChatGPT\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/zenteno.org\/tech-talks\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":[\"Person\",\"Organization\"],\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580\",\"name\":\"adminwp\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/IMG_1739.jpg\",\"contentUrl\":\"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/IMG_1739.jpg\",\"width\":512,\"height\":512,\"caption\":\"adminwp\"},\"logo\":{\"@id\":\"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/image\/\"},\"sameAs\":[\"http:\/\/localhost:4000\"],\"url\":\"https:\/\/zenteno.org\/tech-talks\/author\/adminwp\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How can I host my AI locally? 
- My Tech Talks with ChatGPT","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/","og_locale":"en_US","og_type":"article","og_title":"How can I host my AI locally? - My Tech Talks with ChatGPT","og_description":"To host your AI locally using something like Ollama, here\u2019s a breakdown of the steps from the transcript: 1. Hardware Requirements 2. Install Ollama Ollama is the foundation for running AI models locally. 3. Download an AI Model 4. Run the AI Model 5. Interact with the Model 6. Expand Functionality with a Web Interface [&hellip;]","og_url":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/","og_site_name":"My Tech Talks with ChatGPT","article_published_time":"2024-10-17T04:33:42+00:00","article_modified_time":"2024-10-17T22:01:57+00:00","og_image":[{"width":1280,"height":720,"url":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp","type":"image\/webp"}],"author":"adminwp","twitter_card":"summary_large_image","twitter_misc":{"Written by":"adminwp","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#article","isPartOf":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/"},"author":{"name":"adminwp","@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580"},"headline":"How can I host my AI locally?","datePublished":"2024-10-17T04:33:42+00:00","dateModified":"2024-10-17T22:01:57+00:00","mainEntityOfPage":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/"},"wordCount":511,"publisher":{"@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580"},"image":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage"},"thumbnailUrl":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp","keywords":["docker","ollama","Open web UI","openwebui","server","Stable Diffusion","stablediffusion","ubuntu"],"articleSection":["AI","Technology"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/","url":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/","name":"How can I host my AI locally? 
- My Tech Talks with ChatGPT","isPartOf":{"@id":"https:\/\/zenteno.org\/tech-talks\/#website"},"primaryImageOfPage":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage"},"image":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage"},"thumbnailUrl":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp","datePublished":"2024-10-17T04:33:42+00:00","dateModified":"2024-10-17T22:01:57+00:00","breadcrumb":{"@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#primaryimage","url":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp","contentUrl":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/aiserverlocal-2.webp","width":1280,"height":720},{"@type":"BreadcrumbList","@id":"https:\/\/zenteno.org\/tech-talks\/how-can-i-host-my-ai-locally\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/zenteno.org\/tech-talks\/"},{"@type":"ListItem","position":2,"name":"How can I host my AI locally?"}]},{"@type":"WebSite","@id":"https:\/\/zenteno.org\/tech-talks\/#website","url":"https:\/\/zenteno.org\/tech-talks\/","name":"My Tech Talks with 
ChatGPT","description":"","publisher":{"@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/zenteno.org\/tech-talks\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":["Person","Organization"],"@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/b6442e8a5e39de0647f2ecf534e18580","name":"adminwp","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/image\/","url":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/IMG_1739.jpg","contentUrl":"https:\/\/zenteno.org\/tech-talks\/wp-content\/uploads\/2024\/10\/IMG_1739.jpg","width":512,"height":512,"caption":"adminwp"},"logo":{"@id":"https:\/\/zenteno.org\/tech-talks\/#\/schema\/person\/image\/"},"sameAs":["http:\/\/localhost:4000"],"url":"https:\/\/zenteno.org\/tech-talks\/author\/adminwp\/"}]}},"_links":{"self":[{"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/posts\/98","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/comments?post=98"}],"version-history":[{"count":2,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/posts\/98\/revisions"}],"predecessor-version":[{"id":141,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/posts\/98\/revisions\/141"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/media\/99"}],"wp:attachment":[{"href":"https:\/\/zenteno.org\/tech-t
alks\/wp-json\/wp\/v2\/media?parent=98"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/categories?post=98"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zenteno.org\/tech-talks\/wp-json\/wp\/v2\/tags?post=98"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}