Enterprise AI FAQ: Answers to the 20 Most Common Questions

We answer these questions repeatedly. Here they are with direct answers.

Strategy Questions

Q: Should we build or buy our AI capability?

A: For the model layer: buy (use existing LLMs). For the knowledge layer: build or partner (it requires your specific organizational context). For applications: mix (some standard, some custom).

The knowledge layer—entity resolution, relationship mapping, business rules—is where your organizational specificity lives. You can't buy that off the shelf because it doesn't exist. You build it with a platform or partner.

Detailed analysis →

Q: How long does enterprise AI deployment take?

A: Meaningful capability: 3-6 months. Mature, scaled deployment: 12-18 months.

You can get AI running in weeks. Getting AI that's accurate on internal data, trusted by users, and integrated into workflows takes longer. Budget for the real timeline, not the demo timeline.

90-day quick win playbook →

Q: What ROI should we expect?

A: Well-implemented enterprise AI typically delivers 3-5x ROI through productivity gains, accuracy improvements, and knowledge preservation. But poorly implemented AI delivers negative ROI through wasted investment and wrong decisions.

The difference is context infrastructure. AI without organizational understanding produces costs. AI with understanding produces value.

ROI calculation guide →

Q: What's the biggest reason enterprise AI projects fail?

A: The #1 reason: treating data access as equivalent to data understanding. Companies connect AI to data and expect intelligence to emerge. It doesn't. AI needs context to interpret data correctly.

The #2 reason: scope creep that prevents proving value on anything.

Security and Privacy Questions

Q: Can we use AI without sending data to external providers?

A: Yes. On-premise deployment with open-source models (Llama, Mistral) keeps all data within your perimeter. Your data never leaves your network.

This is the standard approach for regulated industries, defense contractors, and organizations with sensitive IP.

Q: How do we ensure AI doesn't expose sensitive information?

A: Three mechanisms:

  1. Access control: AI only returns information the user is authorized to see
  2. Data classification: Sensitive data is tagged and handled appropriately
  3. Deployment model: On-premise deployment eliminates external exposure

Security architecture guide →
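Mechanism 1 can be sketched in a few lines: filter retrieved content against the user's authorization before it ever reaches the model. This is an illustrative sketch only; the `Document` class, group names, and classification labels are hypothetical, and real systems enforce this in the retrieval layer with your existing identity provider.

```python
# Hypothetical sketch: drop anything the user is not cleared to see
# before it is passed to the model as context.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    classification: str            # e.g. "public", "internal", "restricted"
    allowed_groups: frozenset      # groups authorized to see this document

def authorized(doc: Document, user_groups: set) -> bool:
    """A user may see a document only if they share an allowed group."""
    return bool(doc.allowed_groups & user_groups)

def retrieve_for_user(candidates, user_groups):
    """Filter candidate documents down to what this user may see."""
    return [d for d in candidates if authorized(d, user_groups)]

docs = [
    Document("Q3 revenue summary", "internal", frozenset({"finance"})),
    Document("Public pricing page", "public", frozenset({"everyone", "finance"})),
]

# A user in the "everyone" group only sees the public document.
visible = retrieve_for_user(docs, {"everyone"})
```

The key design point: the filter runs before prompting, so sensitive text never enters the model's context at all, rather than relying on the model to withhold it.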

Q: What about GDPR/HIPAA/SOC 2 compliance?

A: All achievable with proper architecture:

  • GDPR: On-premise deployment within your perimeter eliminates data transfer concerns
  • HIPAA: PHI stays within your environment, no BAA needed with AI vendors if self-hosted
  • SOC 2: AI systems integrate with existing security controls

Compliance playbook →

Technical Questions

Q: What's the difference between RAG and knowledge graphs?

A: RAG retrieves documents. Knowledge graphs understand entities and relationships.

RAG is good for: "Find documents about X."
Knowledge graphs are good for: "Tell me about our relationship with entity X."

Most enterprises need both.

Detailed comparison →
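The contrast above can be shown in a toy sketch (all names and data are illustrative, not a real implementation): RAG-style retrieval returns documents that mention a term, while a knowledge graph answers relationship questions by looking up typed facts about an entity.

```python
# Illustrative only: RAG returns matching documents; a knowledge graph
# returns structured facts about an entity.

documents = [
    "Acme Corp renewed their support contract in March.",
    "Meeting notes: pricing discussion with Acme.",
]

def rag_retrieve(query_term: str):
    """Document retrieval: find texts that mention the term."""
    return [d for d in documents if query_term.lower() in d.lower()]

# Knowledge graph as (subject, relation, object) triples.
graph = {
    ("Acme Corp", "has_contract", "Support Plan B"),
    ("Acme Corp", "account_owner", "J. Rivera"),
    ("Support Plan B", "renewal_date", "2025-03-01"),
}

def kg_about(entity: str):
    """Entity view: every relationship the graph knows for one entity."""
    return {(rel, obj) for (subj, rel, obj) in graph if subj == entity}

rag_result = rag_retrieve("Acme")      # a list of documents to read
kg_result = kg_about("Acme Corp")      # a set of facts to reason over
```

Note the different return shapes: RAG hands the model text to interpret; the graph hands it facts it can combine. That is why most enterprises end up needing both.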

Q: Do we need to fine-tune models on our data?

A: Usually no. Fine-tuning teaches models style and patterns, not facts. For organizational knowledge, knowledge graphs are more effective, faster to update, and more auditable.

Fine-tune only if you need the model to write in a specific style or handle domain-specific language that base models don't know.

Q: What's entity resolution and why does it matter?

A: Entity resolution is understanding that "Acme Corp," "ACME," "Customer 4412," and "the Acme account" all refer to the same entity.

Without it, AI gives fragmented, incomplete, and inconsistent answers about your organizational entities. With it, AI can provide complete, accurate responses.

Entity resolution is the core function of a knowledge graph.
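At its simplest, entity resolution can be sketched as an alias table that maps every known mention to one canonical entity. This is a deliberately minimal illustration; production systems combine matching rules, similarity scoring, and human review rather than a hand-written dictionary.

```python
# Minimal sketch of entity resolution via an alias table (illustrative).
ALIASES = {
    "acme corp": "Acme Corp",
    "acme": "Acme Corp",
    "customer 4412": "Acme Corp",
    "the acme account": "Acme Corp",
}

def resolve(mention: str) -> str:
    """Map a raw mention to its canonical entity; keep unknowns as-is."""
    return ALIASES.get(mention.strip().lower(), mention)

mentions = ["ACME", "Customer 4412", "the Acme account"]
canonical = {resolve(m) for m in mentions}
# All three mentions collapse to a single canonical entity.
```

Once every mention resolves to one entity, the AI can assemble a complete answer instead of three fragmentary ones.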

Q: Why doesn't ChatGPT know about our company?

A: ChatGPT is trained on public internet data. Your internal reality—org structure, customer relationships, processes, terminology—isn't on the public internet.

To make AI know about your company, you need to build knowledge infrastructure that captures your organizational context.

Implementation Questions

Q: Where should we start?

A: Start with a high-value, contained use case:

  1. Identify a specific team with a knowledge problem
  2. Define success metrics upfront
  3. Build knowledge coverage for their domain
  4. Deploy, measure, iterate
  5. Expand based on success

Don't try to transform the enterprise on day one.

Q: What data do we need to provide?

A: At minimum:

  • Core entity data (customers, products, projects, people)
  • Key relationships (who owns what, what connects to what)
  • Critical documents (policies, procedures, key content)

You don't need everything. Start with what matters most for your initial use case and expand.

Q: How do we maintain AI accuracy over time?

A: AI accuracy degrades as your organization changes unless knowledge is updated. Required:

  • Change detection mechanisms
  • Knowledge update processes
  • Feedback loops that capture corrections
  • Periodic knowledge review

Budget for ongoing maintenance, not just initial build.

Q: What team do we need?

A: For a serious enterprise AI program:

  • Executive sponsor (business leadership)
  • Product owner (defines requirements and priorities)
  • Data/knowledge engineers (build knowledge infrastructure)
  • ML/AI engineers (if self-hosting models)
  • Change management (drive adoption)

Exact size depends on ambition. Start smaller and expand.

Adoption Questions

Q: How do we get people to actually use AI?

A: Change management determines success. Key elements:

  • Executive modeling (leaders use AI visibly)
  • Clear value proposition (what's in it for users)
  • Training that fits workflows
  • Champions in each team
  • Feedback incorporation
  • Recognition for adoption

"Build it and they will come" doesn't work.

Q: What if users don't trust AI answers?

A: Trust requires:

  • Accuracy: AI must actually be right. 85%+ accuracy is the threshold for trust.
  • Transparency: Users should see where answers come from
  • Verification options: Easy way to check sources
  • Correction mechanisms: When AI is wrong, users can fix it
  • Improvement over time: Users see AI getting better

Trust is earned through consistent performance, not promised through marketing.

Q: How do we handle AI errors?

A: AI will make errors. Handle them with:

  • Easy error flagging (thumbs down, "wrong" button)
  • Fast response to flagged errors
  • Root cause analysis (why did this fail?)
  • Knowledge updates to prevent recurrence
  • Communication to users about fixes

Errors are learning opportunities. The goal is fewer errors over time, not zero errors on day one.

Vendor Questions

Q: How do we evaluate AI vendors?

A: Key questions:

  1. How do they handle entity resolution?
  2. Do they build knowledge graphs or just RAG?
  3. Can they deploy on-premise?
  4. What's accuracy on internal entity questions?
  5. How do they handle updates and maintenance?

Avoid vendors who only demo well but can't explain how they achieve accuracy.

12 evaluation questions →

Q: Should we use multiple AI tools or consolidate?

A: Consolidate the knowledge layer; diversify at the application layer.

One knowledge infrastructure (entity resolution, relationships, facts) feeds multiple applications. Different applications may serve different use cases, but they should share organizational understanding.

Q: What's the typical pricing model?

A: Models vary:

  • Per-user: Common for SaaS products
  • Per-query: API-style pricing
  • Platform license: Annual fee for capability
  • Professional services: Implementation and customization

Budget for implementation (often 30-50% of Year 1 cost) and ongoing operations (15-25% of license annually).
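A worked example of that budgeting guidance, with hypothetical figures (the license amount and the exact percentages within the stated ranges are assumptions for illustration):

```python
# Worked example of the budgeting rule of thumb (all figures hypothetical).
license_annual = 200_000              # assumed annual platform license
ops = 0.20 * license_annual           # operations: 15-25% of license annually

# Implementation is often 30-50% of total Year 1 cost; take 40% here.
impl_share = 0.40
year_1_total = (license_annual + ops) / (1 - impl_share)
implementation = impl_share * year_1_total

# License $200k + operations $40k + implementation $160k = $400k in Year 1.
```

The point of the arithmetic: if implementation is 40% of the Year 1 total, it roughly doubles the license-plus-operations figure, so a license quote alone understates the first-year budget substantially.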

The Meta-Question

Q: Is enterprise AI actually ready for production?

A: Yes—with appropriate expectations.

AI is ready for:

  • Knowledge Q&A with proper context infrastructure
  • Document analysis and summarization
  • Process automation with human oversight
  • Decision support (not autonomous decision-making)

AI is not yet ready for:

  • Fully autonomous high-stakes decisions
  • Perfect accuracy without human verification
  • Replacement of expert judgment

Deploy for augmentation, not replacement. Build appropriate safeguards. Iterate toward greater capability over time.


Have a question not answered here? Book a call →

Ready to make AI understand your data?

See how Phyvant gives your AI tools the context they need to get things right.

Talk to us