Artificial Intelligence

Here at Envestnet, AI efforts are powered by our mission to bring to market the most innovative technology for the future of wealth management. We are optimistic about the potential of AI and its ability to help us better surface client needs, personalize insights, and save time while driving business growth. At the same time, we recognize that evolving technologies can raise novel challenges and risks. Our approach to AI is underpinned by our unwavering commitment to developing and deploying AI systems responsibly, ensuring they are trustworthy, transparent, and aligned with our ethical standards.

Learn more about Envestnet's Approach to AI and our AI Acceptable Use Policy below.

Envestnet is committed to responsible AI by embedding the following guiding principles into our use of AI applications and processes:

Sustainable Growth and Development

  • AI should contribute to inclusive growth and the well-being of people and the planet.

Human-Centered Privacy and Fairness

  • AI must respect human rights, values, and fairness, ensuring no group is unfairly treated and that AI systems do not perpetuate bias or discrimination.

Transparency and Explainability

  • AI systems must be transparent. Users need to understand how decisions are made and why.

Robustness, Security, and Safety

  • AI should be robust and secure, with safeguards to ensure it behaves as intended.

Accountability

  • Those developing AI are responsible for its impact and must be accountable for their systems through traceability and systematic risk management.

AI Development, Deployment, and Continuous Improvement Activities

  • Deployment Policies, Standards and Guidelines: Provide developers and deployers of AI systems with clear guidance and standards aligned to our approach for the responsible use of AI, including planning and testing configurations prior to deployment.
  • Data Quality: Ensure data used in AI systems is accurate, relevant, representative and of high quality, with regular assessments against key performance indicators to identify impacts on the product lifecycle and associated processes that may conflict with organizational values or exacerbate biased or discriminatory outcomes.
  • Risk Assessments: Conduct AI risk assessments at multiple stages of the AI lifecycle, particularly for new or updated applications, maintaining clear organizational roles and responsibilities for identifying and mitigating potential risks based on established tolerance levels.
  • Impact Assessments: Implement ethics, human rights, and privacy impact assessments to evaluate potential impacts and benefits where more sensitive information is in scope, including evaluating the qualitative and quantitative costs of internal and external AI system failures.
  • Privacy Preserving: Maintain consistency with established, enterprise-level privacy practices and expectations, including notice and transparency, individual consumer rights and control over personal information, data minimization, purpose specification and data retention.
    • To learn more about our general privacy and data practices and commitments, visit the Privacy page.
  • Cyber Security Conscious: Leverage effective controls related to the protection of data, including strong role-based access management and encryption strategies for data at rest and in transit in alignment with industry standards.
  • Supply Chain Due Diligence: Follow established procurement processes to ensure any AI system leveraging third-party models contractually requires the vendor to provide at least the same level of protection as is regularly provided internally.
  • Feedback Mechanisms and Stakeholder Engagement: Establish channels for feedback from users and stakeholders to continuously improve AI systems and address validity, design, explainability and interpretability concerns.
  • Ongoing Integrity Monitoring: Implement continuous monitoring and regular audits of AI systems, with a human in the loop, to ensure that systems are comprehensively inventoried and operate as intended, and that any unintended impacts are addressed.
  • Employee Training Programs: Provide mandatory and ongoing training on AI ethics and responsible AI practices to empower all employees involved in and responsible for AI development, deployment and risk management.
  • Public Stakeholder Awareness and Education: Engage with stakeholders, including clients and regulators, to build trust in AI systems and ensure transparency related to AI system use and known limitations.
  • Regular Compliance Reviews: Conduct regular reviews of AI systems and policies to adapt to technological advancements and regulatory changes, informing key personnel of updated considerations and requirements.

Compliance and Governance Oversight

Envestnet employs the oversight of an AI Governance Committee for all AI-related systems and initiatives. The committee is composed of a cross-functional group of subject matter experts from across the organization and is tasked with overseeing and guiding the development, deployment, and continuous improvement of AI systems both internally and externally. The committee ensures that AI initiatives are centrally inventoried and align with Envestnet’s ethical standards, client expectations, regulatory requirements, and strategic objectives, fostering trust and transparency in AI technologies.

As AI technologies continue to evolve, Envestnet will continuously assess and adapt our practices to capitalize on these advances while managing associated risks. We remain committed to responsible AI practices and to innovation, while maintaining the trust of our clients and stakeholders.

Learn more in the AI Acceptable Use Policy