\n"
      ]
     }
    ],
    "source": [
-    "# Get all sidemenu links as a list\n",
-    "side_menu_links = soup.select(\"a.sidemenu\")\n",
+    "# obtener elementos con el selector CSS \"a.dropdown-item\"\n",
+    "side_menu_links = soup.select(\"a.dropdown-item\")\n",
     "\n",
-    "# Examine the first link\n",
+    "# examinar el primer elemento\n",
     "first_link = side_menu_links[0]\n",
-    "print(first_link)\n",
     "\n",
-    "# What class is this variable?\n",
-    "print('Class: ', type(first_link))"
+    "# obtener el texto del enlace\n",
+    "print(first_link.text)\n",
+    "\n",
+    "# cuál clase de objeto es?\n",
+    "print('Class: ', type(first_link))\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "¡Es una etiqueta de Beautiful Soup! Esto significa que tiene un miembro \"texto\":"
+    "It's a Beautiful Soup tag! This means it has a `text` member:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 154,
    "metadata": {
     "tags": []
    },
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      " English\n",
+      "                            \n"
+     ]
+    }
+   ],
    "source": [
     "print(first_link.text)"
    ]
@@ -547,18 +1870,26 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "A veces necesitamos el valor de ciertos atributos. Esto es especialmente relevante para las etiquetas «a» o enlaces, donde el atributo «href» nos indica adónde lleva el enlace.\n",
+    "Sometimes we want the value of certain attributes. This is particularly relevant for `a` tags, or links, where the `href` attribute tells us where the link goes.\n",
     "\n",
-    "💡 **Consejo**: Puedes acceder a los atributos de una etiqueta tratándola como un diccionario:"
+    "💡 **Tip**: You can access a tag’s attributes by treating the tag like a dictionary:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 155,
    "metadata": {
     "tags": []
    },
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "#\n"
+     ]
+    }
+   ],
    "source": [
     "print(first_link['href'])"
    ]
@@ -569,16 +1900,21 @@
    "source": [
     "## 🥊 Desafío: Extraer atributos específicos\n",
     "\n",
-    "Extraer todos los atributos `href` de cada URL `mainmenu`."
+    "Extrae todos los atributos `href` de cada URL `mainmenu`."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 156,
    "metadata": {},
    "outputs": [],
    "source": [
-    "# YOUR CODE HERE\n"
+    "# YOUR CODE HERE\n",
+    "# Extraer todos los atributos href de cada enlace con la clase 'mainmenu'\n",
+    "mainmenu_links = soup.select(\"a.mainmenu\")\n",
+    "\n",
+    "for link in mainmenu_links:\n",
+    "    print(link.get('href'))\n"
    ]
   },
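+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A quick aside on the two ways of reading an attribute (a small illustration using the `first_link` tag selected earlier): indexing with `['href']` raises a `KeyError` when the attribute is missing, while `.get()` returns `None`, or a default you supply, instead. That is why the solution above uses `.get('href')`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Dictionary-style indexing vs. .get(): .get() is the safer choice\n",
+    "# when an attribute might be missing from a tag\n",
+    "print(first_link['href'])\n",
+    "print(first_link.get('href'))\n",
+    "print(first_link.get('no-such-attribute', 'attribute not found'))"
+   ]
+  },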
   {
@@ -589,7 +1925,7 @@
     "\n",
     "# Análisis de la Asamblea General de Illinois\n",
     "\n",
-    "Aunque parezca increíble, estas son las herramientas fundamentales para analizar un sitio web. Una vez que dediques más tiempo a familiarizarte con HTML y CSS, simplemente será cuestión de comprender la estructura de un sitio web específico y aplicar inteligentemente las herramientas de Beautiful Soup y Python.\n",
+    "Aunque parezca increíble, estas son las herramientas fundamentales para analizar un sitio web. Una vez que dediques más tiempo a familiarizarte con HTML y CSS, solo tendrás que comprender la estructura de un sitio web específico y aplicar con inteligencia las herramientas de Beautiful Soup y Python.\n",
     "\n",
     "Apliquemos estas habilidades para analizar la [98.ª Asamblea General de Illinois](http://www.ilga.gov/senate/default.asp?GA=98).\n",
     "\n",
@@ -600,24 +1936,24 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Rastrear y analizar la página web\n",
+    "## Analizar la página web\n",
     "\n",
-    "Rastreemos y analicemos la página web con las herramientas que aprendimos en la sección anterior."
+    "Analicemos la página web usando las herramientas que aprendimos en la sección anterior."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 157,
    "metadata": {
     "tags": []
    },
    "outputs": [],
    "source": [
-    "# Make a GET request\n",
-    "req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')\n",
-    "# Read the content of the server’s response\n",
+    "# Hacemos una nueva solicitud a otra página\n",
+    "req = requests.get('https://www.ilga.gov/Senate/Members/rptMemberList')\n",
+    "# leer el contenido de la respuesta del servidor\n",
     "src = req.text\n",
-    "# Soup it\n",
+    "# analiza la respuesta y conviértela en un árbol HTML.\n",
     "soup = BeautifulSoup(src, \"lxml\")"
    ]
   },
@@ -632,11 +1968,22 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 158,
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "60"
+      ]
+     },
+     "execution_count": 158,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
-    "# Get all table row elements\n",
+    "# obtener todas las filas de la tabla\n",
     "rows = soup.find_all(\"tr\")\n",
     "len(rows)"
    ]
@@ -650,15 +1997,15 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 159,
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Returns every ‘tr tr tr’ css selector in the page\n",
-    "rows = soup.select('tr tr tr')\n",
+    "# retornar solo las filas que están dentro de otra fila\n",
+    "rows = soup.select('a.dropdown-item')\n",
     "\n",
-    "for row in rows[:5]:\n",
-    "    print(row, '\\n')"
+    "for row in rows[:20]:\n",
+    "    print(row, '\\n')\n"
    ]
   },
   {
@@ -670,12 +2017,42 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 160,
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "0\n",
+      "[]\n"
+     ]
+    }
+   ],
    "source": [
-    "example_row = rows[2]\n",
-    "print(example_row.prettify())"
+    "print(len(rows))\n",
+    "print(rows)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 161,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "No rows found.\n"
+     ]
+    }
+   ],
+   "source": [
+    "if rows:\n",
+    "\texample_row = rows[0]\n",
+    "\tprint(example_row.prettify())\n",
+    "else:\n",
+    "\tprint(\"No rows found.\")"
    ]
   },
   {
@@ -691,21 +2068,93 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 162,
    "metadata": {},
    "outputs": [],
    "source": [
-    "for cell in example_row.select('td'):\n",
-    "    print(cell)\n",
-    "print()\n",
+    "# Parse the HTML to get the rows\n",
+    "soup = BeautifulSoup(src, \"lxml\")\n",
+    "rows = soup.select(\"tbody tr\")\n",
     "\n",
-    "for cell in example_row.select('.detail'):\n",
-    "    print(cell)\n",
-    "print()\n",
+    "if rows:\n",
+    "\texample_row = rows[0]\n",
+    "else:\n",
+    "\texample_row = None\n",
+    "\tprint(\"No rows found.\")\n",
+    "  "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 163,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      " | \n",
+      "  \n",
+      "   Neil Anderson\n",
+      "  \n",
+      "  (R)\n",
+      " \n",
+      "\n",
+      "  47th District\n",
+      "
 | \n",
+      "  208 A Capitol Building\n",
+      " \n",
+      "\n",
+      "
 \n",
+      "  Springfield, IL 62706\n",
+      "
 \n",
+      "  (217) 782-5957\n",
+      "
 | \n",
+      "  103 North College Avenue\n",
+      " \n",
+      "\n",
+      "  #201\n",
+      "
 \n",
+      "  Aledo IL 61231\n",
+      "
 \n",
+      "  (309) 230-7584\n",
+      "
 | 
\n",
+      "\n",
+      "\n",
+      "Neil Anderson (R)\n",
+      " \n",
+      "\n",
+      "                                    47th District\n",
+      "
 | \n",
+      "                                    208 A Capitol Building \n",
+      "\n",
+      "
 \n",
+      "                                    Springfield, IL 62706
 \n",
+      "                                    (217) 782-5957\n",
+      "                                    \n",
+      "
 | 103 North College Avenue \n"
+     ]
+    }
+   ],
+   "source": [
+    "if rows:\n",
+    "    example_row = rows[0]\n",
+    "    print(example_row.prettify())\n",
     "\n",
-    "for cell in example_row.select('td.detail'):\n",
-    "    print(cell)\n",
-    "print()"
+    "    # Aquí procesa example_row solo si existe\n",
+    "    for cell in example_row.select('td'):\n",
+    "        print(cell)\n",
+    "    for cell in example_row.select('.detail'):\n",
+    "        print(cell)\n",
+    "    for cell in example_row.select('td.detail'):\n",
+    "        print(cell)\n",
+    "else:\n",
+    "    print(\"No rows found.\")\n"
    ]
   },
   {
@@ -717,20 +2166,40 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 167,
    "metadata": {
     "tags": []
    },
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tds: 3\n",
+      "details: 0\n",
+      "td.details: 0\n"
+     ]
+    }
+   ],
    "source": [
-    "assert example_row.select('td') == example_row.select('.detail') == example_row.select('td.detail')"
+    "# revisamos cuántos elementos encuentra cada método\n",
+    "tds = example_row.select('td')\n",
+    "details = example_row.select('.detail')\n",
+    "td_details = example_row.select('td.detail')\n",
+    "\n",
+    "print(\"tds:\", len(tds))\n",
+    "print(\"details:\", len(details))\n",
+    "print(\"td.details:\", len(td_details))\n",
+    "\n",
+    "# solo para verificar que todos los .detail están en\n",
+      "                                    #201
 \n",
+      "                                    Aledo IL 61231
 \n",
+      "                                    (309) 230-7584\n",
+      "
 | \n",
+    "assert td_details == details  # estos deben de ser iguales"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Utilicemos el selector `td.detail` para ser lo más específicos posible."
+    "Let's use the selector `td.detail` to be as specific as possible."
    ]
   },
   {
@@ -748,7 +2217,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "La mayoría de las veces, nos interesa el **texto** real de un sitio web, no sus etiquetas. Recordemos que para obtener el texto de un elemento HTML, usamos el miembro `text`:"
+    "Most of the time, we're interested in the actual **text** of a website, not its tags. Recall that to get the text of an HTML element, we use the `text` member:"
    ]
   },
   {
@@ -767,7 +2236,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "¡Se ve bien! Ahora solo necesitamos usar nuestros conocimientos básicos de Python para obtener los elementos de esta lista que necesitamos. Recuerda: queremos el nombre del senador, su distrito y su partido."
+    "Looks good! Now we just use our basic Python knowledge to get the elements of this list that we want. Remember, we want the senator's name, their district, and their party."
    ]
   },
   {
@@ -785,9 +2254,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Eliminando filas basura\n",
+    "## Getting Rid of Junk Rows\n",
     "\n",
-    "Vimos al principio que no todas las filas que obtuvimos corresponden a un senador. Tendremos que hacer limpieza antes de continuar. Vean algunos ejemplos:"
+    "We saw at the beginning that not all of the rows we got actually correspond to a senator. We'll need to do some cleaning before we can proceed forward. Take a look at some examples:"
    ]
   },
   {
@@ -805,9 +2274,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Al escribir nuestro bucle for, queremos que solo se aplique a las filas relevantes. Por lo tanto, debemos filtrar las filas irrelevantes. Para ello, comparamos algunas de estas filas con las que necesitamos, observamos sus diferencias y luego formulamos esto en una condición.\n",
+    "When we write our for loop, we only want it to apply to the relevant rows. So we'll need to filter out the irrelevant rows. The way to do this is to compare some of these to the rows we do want, see how they differ, and then formulate that in a conditional.\n",
     "\n",
-    "Como puedes imaginar, hay muchas maneras de hacerlo, y dependerá del sitio web. Aquí te mostraremos algunas para que te hagas una idea de cómo hacerlo."
+    "As you can imagine, there a lot of possible ways to do this, and it'll depend on the website. We'll show some here to give you an idea of how to do this."
    ]
   },
   {
@@ -829,7 +2298,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Quizás las buenas filas tengan una longitud de 5. Comprobémoslo:"
+    "Perhaps good rows have a length of 5. Let's check:"
    ]
   },
   {
@@ -850,7 +2319,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Encontramos una fila de pie de página en nuestra lista que queremos evitar. Probemos algo diferente:"
+    "We found a footer row in our list that we'd like to avoid. Let's try something else:"
    ]
   },
   {
@@ -886,16 +2355,16 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "¡Parece que encontramos algo que funcionó!"
+    "Looks like we found something that worked!"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Unir todo en un bucle\n",
+    "## Loop it All Together\n",
     "\n",
-    "Ahora que hemos visto cómo obtener los datos que queremos de una fila y filtrar las filas que no necesitamos, vamos a unirlo todo en un bucle."
+    "Now that we've seen how to get the data we want from one row, as well as filter out the rows we don't want, let's put it all together into a loop."
    ]
   },
   {
@@ -942,7 +2411,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Echemos un vistazo a lo que tenemos en \"miembros\"."
+    "Let's take a look at what we have in `members`."
    ]
   },
   {
@@ -958,37 +2427,37 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 🥊  Desafío: Obtener elementos `href` que apunten a los proyectos de ley de los miembros\n",
+    "## 🥊  Challenge: Get `href` elements pointing to members' bills \n",
     "\n",
-    "El código anterior recupera información sobre:\n",
+    "The code above retrieves information on:  \n",
     "\n",
-    "- el nombre del senador,\n",
-    "- su número de distrito,\n",
-    "- y su partido.\n",
+    "- the senator's name,\n",
+    "- their district number,\n",
+    "- and their party.\n",
     "\n",
-    "Ahora queremos recuperar la URL de la lista de proyectos de ley de cada senador. Cada URL seguirá un formato específico.\n",
+    "We now want to retrieve the URL for each senator's list of bills. Each URL will follow a specific format. \n",
     "\n",
-    "El formato de la lista de proyectos de ley de un senador determinado es:\n",
+    "The format for the list of bills for a given senator is:\n",
     "\n",
     "`http://www.ilga.gov/senate/SenatorBills.asp?GA=98&MemberID=[MEMBER_ID]&Primary=True`\n",
     "\n",
-    "para obtener algo como:\n",
+    "to get something like:\n",
     "\n",
     "`http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True`\n",
     "\n",
-    "en el cual `MEMBER_ID=1911`. \n",
+    "in which `MEMBER_ID=1911`. \n",
     "\n",
-    "Deberías poder ver que, lamentablemente, `MEMBER_ID` no se extrae actualmente en nuestro código de extracción.\n",
+    "You should be able to see that, unfortunately, `MEMBER_ID` is not currently something pulled out in our scraping code.\n",
     "\n",
-    "Tu tarea inicial es modificar el código anterior para que también **recuperemos la URL completa que apunta a la página correspondiente de los proyectos de ley patrocinados por las primarias**, para cada miembro, y la devolvamos junto con su nombre, distrito y partido.\n",
+    "Your initial task is to modify the code above so that we also **retrieve the full URL which points to the corresponding page of primary-sponsored bills**, for each member, and return it along with their name, district, and party.\n",
     "\n",
-    "Consejos:\n",
+    "Tips: \n",
     "\n",
-    "* Para ello, deberás obtener el elemento de anclaje apropiado (``) en la fila de la tabla de cada legislador. Puedes usar el método `.select()` en el objeto `row` del bucle para hacerlo, similar al comando que encuentra todas las celdas `td.detail` de la fila. Recuerda que solo queremos el enlace a los proyectos de ley del legislador, no a los comités ni a su página de perfil.\n",
-    "* El HTML de los elementos de anclaje se verá como `Proyectos de ley`. La cadena del atributo `href` contiene el enlace **relativo** que buscamos. Puedes acceder a un atributo de un objeto `Tag` de BeatifulSoup de la misma manera que accedes a un diccionario de Python: `anchor['attributeName']`. Consulta la documentación para más detalles.\n",
-    "* Hay muchas maneras diferentes de usar BeautifulSoup. Puedes hacer lo que necesites para extraer el `href`.\n",
+    "* To do this, you will want to get the appropriate anchor element (``) in each legislator's row of the table. You can again use the `.select()` method on the `row` object in the loop to do this — similar to the command that finds all of the `td.detail` cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page.\n",
+    "* The anchor elements' HTML will look like `Bills`. The string in the `href` attribute contains the **relative** link we are after. You can access an attribute of a BeatifulSoup `Tag` object the same way you access a Python dictionary: `anchor['attributeName']`. See the documentation for more details.\n",
+    "* There are a _lot_ of different ways to use BeautifulSoup to get things done. whatever you need to do to pull the `href` out is fine.\n",
     "\n",
-    "El código se ha completado parcialmente. Complétalo donde dice `#TU CÓDIGO AQUÍ`. Guarda la ruta en un objeto llamado `full_path`."
+    "The code has been partially filled out for you. Fill it in where it says `#YOUR CODE HERE`. Save the path into an object called `full_path`."
    ]
   },
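+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "💡 **Tip**: Below is a rough sketch of just the URL-building step, under two assumptions: `anchor` stands for the bills link (`<a>` tag) you select from the current `row`, and its `href` holds the relative link described above. `build_bills_url` is a hypothetical helper used only for illustration:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from urllib.parse import urljoin\n",
+    "\n",
+    "# A sketch of the URL-building step; build_bills_url is a hypothetical\n",
+    "# helper for illustration (inside your loop you can do the same inline)\n",
+    "def build_bills_url(anchor):\n",
+    "    href = anchor['href']\n",
+    "    # Resolve the relative link against the senate section of the site\n",
+    "    full_path = urljoin('http://www.ilga.gov/senate/', href)\n",
+    "    # Make sure the Primary=True parameter described above is present\n",
+    "    if 'Primary=True' not in full_path:\n",
+    "        full_path += '&Primary=True'\n",
+    "    return full_path"
+   ]
+  },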
   {
@@ -1049,9 +2518,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 🥊  Desafío: Modulariza tu código\n",
+    "## 🥊  Challenge: Modularize Your Code\n",
     "\n",
-    "Convierte el código anterior en una función que acepte una URL, rastree la URL para encontrar sus senadores y devuelva una lista de tuplas con información sobre cada senador."
+    "Turn the code above into a function that accepts a URL, scrapes the URL for its senators, and returns a list of tuples containing information about each senator. "
    ]
   },
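+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One possible shape for such a function is sketched below; the name `get_members` is only an example, and the body reuses the request-and-loop pattern from the cells above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import requests\n",
+    "from bs4 import BeautifulSoup\n",
+    "\n",
+    "# A sketch of one possible shape for the function (get_members is an\n",
+    "# example name); fill in the row selection and filtering from above\n",
+    "def get_members(url):\n",
+    "    req = requests.get(url)\n",
+    "    soup = BeautifulSoup(req.text, 'lxml')\n",
+    "    members = []\n",
+    "    # Select the rows, skip the junk ones, and append a tuple of\n",
+    "    # (name, district, party, full_path) for each senator, as above\n",
+    "    return members"
+   ]
+  },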
   {
@@ -1085,21 +2554,21 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 🥊Desafío práctico: Escribir una función de scraping\n",
-    "\n",
-    "Queremos scraping las páginas web correspondientes a los proyectos de ley patrocinados por cada proyecto de ley.\n",
+    "## 🥊 Take-home Challenge: Writing a Scraper Function\n",
     "\n",
-    "Escribir una función llamada `get_bills(url)` para analizar la URL de un proyecto de ley. Esto implica:\n",
+    "We want to scrape the webpages corresponding to bills sponsored by each bills.\n",
     "\n",
-    "- Solicitar la URL mediante la biblioteca `requests`\n",
-    "- Usar las funciones de la biblioteca `BeautifulSoup` para encontrar todos los elementos ` | ` con la clase `billlist`\n",
-    "- Devolver una _lista_ de tuplas, cada una con:\n",
-    "- Descripción (2.ª columna)\n",
-    "- Cámara (S o H) (3.ª columna)\n",
-    "- La última acción (4.ª columna)\n",
-    "- La fecha de la última acción (5.ª columna)\n",
+    "Write a function called `get_bills(url)` to parse a given bills URL. This will involve:\n",
     "\n",
-    "Esta función se ha completado parcialmente. Complete el resto."
+    "  - requesting the URL using the `requests` library\n",
+    "  - using the features of the `BeautifulSoup` library to find all of the ` | ` elements with the class `billlist`\n",
+    "  - return a _list_ of tuples, each with:\n",
+    "      - description (2nd column)\n",
+    "      - chamber (S or H) (3rd column)\n",
+    "      - the last action (4th column)\n",
+    "      - the last action date (5th column)\n",
+    "      \n",
+    "This function has been partially completed. Fill in the rest."
    ]
   },
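+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "💡 **Tip**: The sketch below shows one way to unpack a single row of the bills table into the fields listed above. It assumes the first `td.billlist` cell holds the bill id and the remaining columns follow the order given above; `parse_bill_row` is a hypothetical helper used only for illustration:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# One way to unpack a bills-table row (a sketch under the column\n",
+    "# assumptions described above)\n",
+    "def parse_bill_row(row):\n",
+    "    cells = row.select('td.billlist')\n",
+    "    bill_id = cells[0].text.strip()\n",
+    "    description = cells[1].text.strip()\n",
+    "    chamber = cells[2].text.strip()\n",
+    "    last_action = cells[3].text.strip()\n",
+    "    last_action_date = cells[4].text.strip()\n",
+    "    return (bill_id, description, chamber, last_action, last_action_date)"
+   ]
+  },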
   {
@@ -1117,11 +2586,11 @@
     "    bills = []\n",
     "    for row in rows:\n",
     "        # YOUR CODE HERE\n",
-    "       # bill_id =\n",
-    "        #description =\n",
-    "        #chamber =\n",
-    "        #last_action =\n",
-    "        #last_action_date =\n",
+    "        bill_id =\n",
+    "        description =\n",
+    "        chamber =\n",
+    "        last_action =\n",
+    "        last_action_date =\n",
     "        bill = (bill_id, description, chamber, last_action, last_action_date)\n",
     "        bills.append(bill)\n",
     "    return bills"
@@ -1144,11 +2613,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Extraer todos los proyectos de ley\n",
+    "### Scrape All Bills\n",
     "\n",
-    "Finalmente, cree un diccionario `bills_dict` que asigne un número de distrito (la clave) a una lista de proyectos de ley (el valor) provenientes de ese distrito. Puede hacerlo recorriendo en bucle todos los miembros del senado en `members_dict` y llamando a `get_bills()` para cada una de las URL de sus proyectos de ley asociados.\n",
+    "Finally, create a dictionary `bills_dict` which maps a district number (the key) onto a list of bills (the value) coming from that district. You can do this by looping over all of the senate members in `members_dict` and calling `get_bills()` for each of their associated bill URLs.\n",
     "\n",
-    "**NOTA:** Por favor, llame a la función `time.sleep(1)` en cada iteración del bucle para no destruir el sitio web del estado."
+    "**NOTE:** please call the function `time.sleep(1)` for each iteration of the loop, so that we don't destroy the state's web site."
    ]
   },
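+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "💡 One possible shape for this loop is sketched below. It assumes `members_dict` is keyed by district number and that each value is a tuple whose last element is that member's bills URL; adjust the indexing to match the structure you actually built above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import time\n",
+    "\n",
+    "# A sketch under an assumption about members_dict: each value is taken\n",
+    "# to be a tuple whose last element is that member's bills URL\n",
+    "bills_dict = {}\n",
+    "for district, member in members_dict.items():\n",
+    "    bills_url = member[-1]\n",
+    "    bills_dict[district] = get_bills(bills_url)\n",
+    "    time.sleep(1)  # be gentle with the state's web site"
+   ]
+  },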
   {
@@ -1178,7 +2647,7 @@
  "metadata": {
   "anaconda-cloud": {},
   "kernelspec": {
-   "display_name": ".venv312",
+   "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
@@ -1192,7 +2661,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.12.10"
+   "version": "3.13.6"
   }
  },
  "nbformat": 4, |