Jobs Near Me
  • Home
  • Search Jobs
  • Register CV
  • Post a Job
  • Employer Pricing
  • Contact Us
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

12 jobs found

Current search: observability pipeline engineer hybrid
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Austin, Texas
Position Type: Regular

Your opportunity
At Schwab, you're empowered to make an impact on your career. Here, innovative thought meets creative problem solving, helping us "challenge the status quo" and transform the finance industry together. We believe in the importance of in-office collaboration and fully intend for the selected candidate for this role to work on site in the specified location(s).

This role is responsible for supporting and maintaining enterprise monitoring and telemetry platforms: the Confluent Enterprise Platform (Kafka), ITRS Geneos, and the OpenTelemetry telemetry pipeline, as a member of the Enterprise Telemetry team. Activities include supporting Kafka producers and consumers, ITRS agent administration, OTel pipeline management, troubleshooting and resolving issues, identifying opportunities for improvement, and creating reference and run-book documentation. You may also participate in developing observability dashboards and configuring monitoring and alerting as needed. You must be able to plan, coordinate, and implement changes, and use tools to troubleshoot incidents. Strong verbal and written communication skills are required. This position will help monitor the health of these environments and address issues in a timely manner, and will contribute to building and supporting the enterprise telemetry pipeline. We're looking for someone proficient with monitoring tools and Linux administration, and proficient in Kafka administration, including installing software, modifying configuration files, and managing agents; a highly efficient multi-tasker with great organizational skills. Splunk, Grafana, and Datadog experience is a plus.

Duties will include:
  • On-boarding new Kafka producer and consumer use cases
  • Engineering and supporting the enterprise telemetry pipeline
  • Testing and deploying software upgrades
  • Managing and supporting telemetry agents
  • Supporting OpenTelemetry collectors
  • Troubleshooting and resolving issues

What you have
  • Deep understanding of Confluent Enterprise Platform components: brokers, topics, partitions, producers, consumers, ZooKeeper, KRaft
  • Ability to set up and configure on-prem Kafka components, replication factors, and partitioning
  • Experience engineering logging platforms
  • Understanding of telemetry monitoring platforms and concepts, such as ITRS Geneos, OpenTelemetry agents like Grafana Alloy, Grafana Cloud, and Datadog
  • Deep understanding of security protocols (SSL/TLS, SASL, LDAP, etc.) and role-based authentication
  • Experience working in telemetry monitoring (alerts, events, logs, metrics, and traces)
  • Experience working in Linux/Unix, Windows, and virtualized environments
  • Understanding of cloud environments (AWS, Azure, GCP, and PCF)
  • Familiarity with DNS, load balancing, and firewalls
  • Ability to analyze logs to diagnose issues
  • Experience using other monitoring or analytics tools such as Splunk or Prometheus
  • Desired: scripting experience with Python, Bash, PowerShell, or similar
  • Desired: knowledge of or experience in high-level languages such as Java or Go

In addition to the salary range, this role is also eligible for bonus or incentive opportunities.

What's in it for you
At Schwab, you're empowered to shape your future. We champion your growth through meaningful work, continuous learning, and a culture of trust and collaboration, so you can build the skills to make a lasting impact. Our Hybrid Work and Flexibility approach balances our ongoing commitment to workplace flexibility, serving our clients, and our strong belief in the value of being together in person on a regular basis.

We offer a competitive benefits package that takes care of the whole you, both today and in the future:
  • 401(k) with company match and employee stock purchase plan
  • Paid time for vacation, volunteering, and a 28-day sabbatical after every 5 years of service for eligible positions
  • Paid parental leave and family building benefits
  • Tuition reimbursement
  • Health, dental, and vision insurance
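To give a sense of the kind of telemetry pipeline work described above, a minimal OpenTelemetry Collector pipeline that receives OTLP data and forwards it to Kafka might look like the following. This is an illustrative sketch only; the broker address and topic name are hypothetical, not Schwab's actual configuration.

```yaml
# Minimal OTel Collector config: receive OTLP metrics over gRPC,
# batch them, and publish to a Kafka topic.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 5s

exporters:
  kafka:
    brokers: ["broker-1:9092"]   # hypothetical broker address
    topic: otlp_metrics          # hypothetical topic name

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]
```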
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Buda, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Kyle, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Cedar Park, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Round Rock, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Pflugerville, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Leander, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab San Marcos, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Hutto, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Taylor, Texas
12/19/2025
Full time
Charles Schwab
Observability Pipeline Engineer - Hybrid
Charles Schwab Austin, Texas
12/19/2025
Full time
Cognizant
Principal Architect - Gen AI & Agentic Systems (Hybrid)
Cognizant Minneapolis, Minnesota
Principal Architect - Gen AI & Agentic Systems
Job ID:
Location: Phoenix, AZ or Minneapolis, MN (Hybrid, 2 to 3 days/week in office)
Employment Type: Full-Time

About the role
As a Gen AI and Agentic AI Architect, you will lead the design and deployment of scalable AI ecosystems for Cognizant's strategic clients. You'll drive AI strategy, build modular platforms, and deliver industry-specific solutions that transform enterprise operations.

In this role, you will:
Architect cloud-native AI platforms using LLMs, SLMs, and multi-agent orchestration.
Advise Fortune 500 clients on AI strategy and transformation.
Deliver verticalized AI use cases across industries.
Lead model development, fine-tuning, and optimization.
Establish MLOps/LLMOps pipelines and governance frameworks.
Build and mentor AI teams and practices.
Co-innovate with hyperscalers, startups, and ISVs.
Contribute to thought leadership through publications and forums.

Work model
This is a hybrid position requiring 2 to 3 days/week in a Cognizant or client office in Phoenix, AZ or Minneapolis, MN. We support flexible work arrangements and a healthy work-life balance through our wellbeing programs.

What you need to have to be considered
15+ years in IT and architecture, including hands-on engineering experience.
5+ years in AI/ML, with 1+ year in Generative and Agentic AI.
Expertise in model training (SFT, RLHF, LoRA), RAG, and evaluation.
Certifications in at least two cloud platforms (AWS, Azure, GCP).
Strong background in MLOps/LLMOps and AI governance.
Experience advising CxOs and leading strategic AI engagements.
Proven leadership in building cross-functional AI teams.

These will help you stand out
Publications or patents in Agentic AI or LLMOps.
Thought leadership at industry events or in the media.
Deep domain expertise in one or more verticals.
Experience with AgentOps, model evaluation, and AI observability tools.
Salary and Other Compensation:
Applications will be accepted until 1/08/2026. Cognizant will only consider applicants for this position who are legally authorized to work in the United States without company sponsorship. Please note that this role is not able to offer visa transfer or sponsorship now or in the future.

The annual salary for this position will be in the range of $120K-$165K, depending on the experience and other qualifications of the successful candidate. This position is also eligible for Cognizant's discretionary annual incentive program, based on performance and subject to the terms of Cognizant's applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
Medical/Dental/Vision/Life Insurance
Paid holidays plus Paid Time Off
401(k) plan and contributions
Long-term/Short-term Disability
Paid Parental Leave
Employee Stock Purchase Plan

Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.

Our strength is built on our ability to work together. Our diverse backgrounds offer different perspectives and new ways of thinking, encouraging lively discussion, creativity, and productivity, and helping us build better solutions for our clients. We want someone who thrives in this setting and is inspired to craft meaningful solutions through true collaboration. If you are comfortable with ambiguity, excited by change, and excel through autonomy, we'd love to hear from you! Apply Now!
12/15/2025
Full time
